Information & Communications Technology in Education

Effective questioning strategies in online videos: evidence based on electroencephalogram data

Article: 2332838 | Received 12 Jan 2024, Accepted 14 Mar 2024, Published online: 25 Mar 2024

Abstract

Online videos are a popular means of imparting education. This study investigated the effects of different questioning strategies used in online videos on learners’ attention levels, as well as the mediating effect of attention levels on the relationship between questioning strategies and learning performance. One hundred students from a Chinese university were randomly assigned to videos using one of five questioning strategies: the pre-leading questioning (PLQ), middle-enhancing questioning (MEQ), and post-assessment questioning (PAQ) strategies, as well as two combinations (PLQ + MEQ and PLQ + PAQ). By using an electroencephalogram (EEG) to measure the learners’ brainwaves, this study found that embedding questions in online videos could increase learners’ attention levels. The results demonstrated that learners exposed to a combination of two questioning strategies paid greater attention than those exposed to a single strategy. Furthermore, attention level was found to be the only mediator in the relationship between the PLQ + MEQ strategy and learning performance, whereas it played a suppressive role in the relationship between the PLQ + PAQ strategy and learning performance. These findings have significant implications for education. Instructors should design questions for online videos based on their teaching objectives at a given stage and consider the potentially negative consequences of other factors (such as cognitive load) when using multiple questioning strategies in the same video.

1. Introduction

Online videos are a popular means of teaching learners of all ages (Jung & Lee, Citation2018). However, students’ attention span drops within 10–15 minutes in a typical classroom and even more quickly when watching online videos (Wilson & Korn, Citation2007). Thus, the issue of effective learning from online videos has received considerable attention from researchers and educators. Although embedded questions in online videos can improve learners’ attention levels, attitude, and academic performance (Mitchell, Citation1997; Shelton, Warren & Archambault, Citation2016), no systematic comparison has been performed regarding the effect of different types of questioning strategies on learning performance and attention when watching online videos.

1.1. The relationship between questioning strategies and attention level

Interactive videos with embedded questions can help boost students’ attention levels while avoiding the passive acceptance of knowledge (Kolås et al., Citation2016). Szpunar et al. (Citation2013) demonstrated that interpolating online courses with memory tests could help students sustain their attention on the course content and thus prevent distraction. Earlier findings have indicated that blended learning, which combines online elements with face-to-face elements such as questioning, can be most effective in capturing students’ attention (Leszczyński et al., Citation2018; Tang et al., Citation2020). However, when the technical pattern of questioning (e.g. embedded quizzes, link-chains, interactive maps, or interactive 3D objects) remains the same, it may distract learners and fail to consolidate learning when they view the video repeatedly (Kolås, Citation2015). Hence, this study aimed to examine how questioning strategies in online videos affect learners’ attention levels.

1.2. The relationship between questioning strategies and learning performance

Online videos with embedded questions can be effective in enhancing learning outcomes (Haagsman et al., Citation2020; Leisner et al., Citation2020; van der Meij & Bӧckmann, Citation2021). For example, using three different question-prompt strategies can promote metacognitive skills and performance in ill-structured problem-solving activities (Byun et al., Citation2014). Furthermore, a concept mapping-based question-posing approach can promote effective learning (Hwang et al., Citation2020). Wachtler and Ebner (Citation2015) analysed the impact of multiple-choice questions inserted at different positions in a video on students. They found that the interval between two question interactions should not be too brief (at most approximately ten interactions per hour) and that questions should be evenly distributed throughout the video to achieve better learning outcomes. Recent research has found that embedding questions in pre-class videos effectively improves learning performance without significant effects on engagement, emotional investment, sustained attention, or satisfaction (Deng et al., Citation2023; Deng & Gao, Citation2023; Polat & Taslibeyaz, Citation2023; Samaila & Al-Samarraie, Citation2023). To better understand how questioning strategies affect video-based learning, this study included learning performance as a variable.

1.3. The relationship between attention level and learning performance

The sustained attention span of learners has a strong and direct influence on their online learning performance (Chen & Wang, Citation2018; Wu, Citation2017). Kokoç et al. (Citation2020) found that different types of online videos (voice over, picture-in-picture, and screencast) and learners’ sustained attention levels (low and high) had a significant impact on learning performance. Similarly, Chen and Wang (Citation2018) developed a novel attention monitoring and alerting mechanism based on brainwave signals and found that learners’ sustained attention spans and attentional alert frequency strongly predicted their learning performance. Moreover, attention level can have an indirect moderating effect on learning performance. For example, online videos with embedded questions enable students to interact with knowledge. This can leverage their short-term attention levels while reinforcing their understanding of the topic (Azlan et al., Citation2020). Wu (Citation2017) suggested that media multitasking self-efficacy can indirectly predict course grades through perceived attention problems and self-regulated attention improvement strategies. Kokoç (Citation2021) found that self-regulated attention control played a mediating role in the relationship between social media multitasking and learning performance. Based on the above findings and the theory of the mediation model (Hoyle & Kenny, Citation1999), this study posited that learners’ attention levels would be a simple mediator between questioning strategies and learning performance.

1.4. The present study

This study used electroencephalogram (EEG; a non-invasive method for recording the electrical activity of the brain) technology to investigate the effects of different questioning strategies used in videos on learners’ attention levels. This study also examined the mediating effect of learners’ attention levels on the relationship between questioning strategies and learning performance using the PROCESS macro. In practice, questioning is not only one of the most popular teaching modes in classroom instruction, but also one of the most commonly used instructional strategies (Brualdi Timmins, Citation1998; Feng, Citation2014). In some real-world classes, more than half of class time is spent on question-and-answer exchanges (Pratama, Citation2019). In contrast, the presence of the teacher in online videos has less influence on learners, and it is difficult for the teacher to interact with students as freely as in a real classroom. Therefore, embedding questions in online videos is an effective way to re-establish meaningful relationships between learners and teachers.

Based on previous research and Gagne’s theory of the Nine Events of Instruction (NEI), this study developed a series of questioning strategies for online videos (Gagne, Citation1970). Since the NEI stages are closely related to learning conditions and provide an ideal teaching sequence (Bashir et al., Citation2021; Pratama, Citation2019), this study integrated them into three questioning strategies, each with unique instructional goals: (1) the pre-leading questioning (PLQ) strategy (from the first three stages of the NEI), which emphasises inserting questions at the beginning of a video to capture the learner’s attention and informing them about the learning objectives; (2) the middle-enhancing questioning (MEQ) strategy (from the middle three stages), which underscores inserting questions during the learning process to enhance cognitive encoding and promote long-term memory storage; and (3) the post-assessment questioning (PAQ) strategy (from the last three stages), which involves inserting questions at the end of a video to evaluate learning performance and promote retention and transfer. Moreover, to determine the feasibility and differentiation of more questioning strategies, this study established two hybrid questioning strategies (i.e. a combination of the PLQ + MEQ strategies and PLQ + PAQ strategies). More details on the design and differentiation of the questioning strategies can be found in Section 2.3.2.

Hence, this study examined how questioning strategies in online videos affect learners’ attention levels in order to better understand how questioning strategies affect video-based learning. Learning performance was included as a variable, and attention level was posited to be a simple mediator between questioning strategies and learning performance. This study comprehensively considered the effectiveness of questioning strategies with regard to learners’ attention retention and learning performance, as stated in the following hypotheses:

H1: There will be a statistically significant difference in the attention levels of students exposed to the experimental conditions.

H2: Attention level plays a mediating role in the relationship between questioning strategies and learning performance.

2. Materials and methods

2.1. Participants and design

The study recruited 100 volunteer freshmen (67 females, 33 males; age range: 17–20 years) majoring in educational technology from a university in China. Participants provided written informed consent and received small gifts as compensation. The university ethics committee approved the study protocol. Participants were randomly assigned to one of five strategies: (1) PLQ (n = 20), (2) MEQ (n = 20), (3) PAQ (n = 20), (4) PLQ + MEQ (n = 20), or (5) PLQ + PAQ (n = 20). After excluding participants who did not complete the online video learning or whose EEG data records were incomplete, the final sample size was 85.

2.2. Apparatus and EEG data analysis

Recent advances in low-cost and portable EEG technology have enabled the collection of brain data in real-world classrooms (Bevilacqua et al., Citation2019; Poulsen et al., Citation2017). In the present study, EEG data were recorded using a non-invasive EEG headband (see Figure 1). The main purpose of the real-time brainwave monitoring system was to monitor changes in the α/β ratio. An artificial intelligence algorithm then converted the collected EEG signals into a score ranging from 0 to 100 in order to provide a unified evaluation standard for users with different EEG activities. In this regard, a value of 0–35 indicates that the learner’s mind was wandering, 36–65 suggests that the learner’s attention level was average but unfocused, and 66–100 implies that the learner’s attention level was highly focused. Thus, this study defined three attention states (high attention: 66–100; average attention: 36–65; and low attention: 0–35). To measure how questioning strategies affect learners’ attention, this study further calculated each learner’s high attention percentage (the proportion of time spent in the high attention state) and mean attention score.
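As a rough illustration, the two attention indices can be derived from the per-second attention scores as follows (a minimal Python sketch; the variable names and sample values are invented, and the actual 0–100 scoring was produced by the headset’s proprietary algorithm):

```python
import numpy as np

# Hypothetical per-second attention scores (0-100) for one learner,
# as produced by the headset's scoring algorithm.
attention = np.array([72, 58, 33, 81, 67, 45, 90, 62, 70, 55])

AVERAGE_MAX = 65  # scores above this value (66-100) count as highly focused

def summarise_attention(values):
    """Return the two attention indices used in the analyses."""
    high = values > AVERAGE_MAX
    return {
        "high_attention_pct": 100 * high.mean(),  # share of time highly focused
        "mean_attention": values.mean(),          # mean attention score
    }

print(summarise_attention(attention))
```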

Figure 1. EEG data collection and analysis of attention values.


2.3. Materials

2.3.1. Online videos

The online video was taken from an excellent course on the China University MOOC platform on the topic of ‘teaching methods’. The video contained declarative knowledge that emphasised the mental process of memorisation performed by the learner. This study reprocessed the video to create five new videos using different questioning strategies. In each version, the same male instructor provided the same information about teaching methods (a total of four knowledge points, including the definition of teaching methods and a detailed introduction to the three types of teaching methods) and used the same slides. The videos were created using Camtasia 2020, a screen-recording and video-editing tool that can integrate interactive elements such as questions and quizzes directly into the video content. The videos were then exported in HTML format, which is compatible with a wide range of web browsers and devices. Each video lasted approximately 27 minutes.

2.3.2. Design of the questioning strategies

This study designed different forms of questions for each questioning strategy. Participants viewed an online video corresponding to one of the five strategies (see Figure 2):

Figure 2. Screenshots of the five strategies and their design.

  1. PLQ: There were four lead-in questions without feedback at the beginning of the video (e.g. What is a teaching method?).

  2. MEQ: Each knowledge point in the video was followed by two enhancement questions along with feedback (a total of eight items) (e.g. What is the teacher’s method of describing the material or depicting the object to the students? A. narrative method; B. lecture-reading method; C. explanation method; D. presentation method).

  3. PAQ: There were eight assessment questions at the end of all knowledge points.

  4. PLQ + MEQ: All questions corresponded to the PLQ and MEQ strategies.

  5. PLQ + PAQ: All questions corresponded to the PLQ and PAQ strategies.

The questions in the MEQ and PAQ strategies were exactly the same; however, they appeared at different locations in the video.
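For illustration, the five strategies can be summarised as a simple configuration of question counts by position (a hypothetical encoding; only the counts and placements follow the descriptions above):

```python
# Hypothetical encoding of the five questioning strategies.
# Counts follow Section 2.3.2; keys indicate where the questions appear.
STRATEGIES = {
    "PLQ":     {"lead_in": 4, "enhancement": 0, "assessment": 0},
    "MEQ":     {"lead_in": 0, "enhancement": 8, "assessment": 0},
    "PAQ":     {"lead_in": 0, "enhancement": 0, "assessment": 8},
    "PLQ+MEQ": {"lead_in": 4, "enhancement": 8, "assessment": 0},
    "PLQ+PAQ": {"lead_in": 4, "enhancement": 0, "assessment": 8},
}
```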

2.3.3. Measurements

2.3.3.1. Prior knowledge test

The prior knowledge test was designed based on video materials and teaching method books to evaluate the participants’ prior knowledge of teaching methods. The test included one open-ended question (worth two points), five single-choice questions (worth five points), three multiple-choice questions (worth six points), and four judgment items (worth four points); the total score was the sum of all items, with a maximum possible score of 17. The prior knowledge test showed moderate internal consistency (α = 0.72). There was no significant difference in prior knowledge among the five strategies (p = 0.64).

2.3.3.2. Learning performance test

Two experienced teachers developed a learning performance test to evaluate the participants’ mastery of key concepts. It included one open-ended question, six single-choice questions, five multiple-choice questions, and six judgment items, which were scored in the same manner as the prior knowledge test. The maximum possible score was 26. The learning performance test showed moderate internal consistency (α = 0.77).
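As a sketch of how the reported internal-consistency coefficients could be obtained, Cronbach’s alpha can be computed directly from a participants-by-items score matrix (the item scores below are invented for illustration):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (participants x items) score matrix."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented item-level scores (5 participants x 4 items) for illustration.
scores = np.array([
    [1, 2, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 2, 1, 1],
    [0, 0, 0, 1],
])
print(round(cronbach_alpha(scores), 2))
```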

2.4. Procedure

The study was conducted in an online classroom and lasted approximately 65 minutes. All participants completed the prior knowledge test and were asked to wear the EEG apparatus. The participants were randomly assigned to one of the five conditions: PLQ, MEQ, PAQ, PLQ + MEQ, or PLQ + PAQ. They viewed the online videos individually, without the option to pause the video or engage in generative learning activities (e.g. note taking or online searching), and completed the learning performance test immediately after the video session. During this period, all the participants’ actions were recorded on video (see Figure 3).

Figure 3. Diagram of the experiment’s design.


3. Results

Descriptive statistics for all variables are shown in Table 1. Continuous variables are presented as the mean and standard deviation (SD) if normally distributed and as the median and interquartile range (IQR) if non-normally distributed (assessed via the Shapiro–Wilk test). This study performed the Kruskal–Wallis H test on high attention percentage and a one-way analysis of variance (ANOVA) on mean attention scores, with the experimental condition (PLQ vs. MEQ vs. PAQ vs. PLQ + MEQ vs. PLQ + PAQ) as the between-subjects factor.
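The test-selection logic described here can be sketched as follows (a minimal Python/SciPy illustration with placeholder data; the study itself used SPSS):

```python
import numpy as np
from scipy import stats

# Placeholder data: one array of attention values per strategy group.
rng = np.random.default_rng(0)
names = ["PLQ", "MEQ", "PAQ", "PLQ+MEQ", "PLQ+PAQ"]
groups = {name: rng.normal(60, 10, 17) for name in names}

# 1. Shapiro-Wilk normality check within each group.
all_normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups.values())

# 2. One-way ANOVA if every group looks normal, otherwise Kruskal-Wallis H.
if all_normal:
    stat, p = stats.f_oneway(*groups.values())
    test = "one-way ANOVA"
else:
    stat, p = stats.kruskal(*groups.values())
    test = "Kruskal-Wallis H"

print(f"{test}: statistic = {stat:.3f}, p = {p:.3f}")
```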

Table 1. Descriptive statistics of all variables.

3.1. Effects of different questioning strategies on attention level

To examine the effect of different questioning strategies on the participants’ attention levels (H1), we compared the high attention percentage and mean attention scores across the five strategies.

3.1.1. High attention percentage

A parametric test (one-way ANOVA) was not appropriate because the high attention percentages were not normally distributed in all groups (WPLQ(15) = 0.896, p = 0.141; WMEQ(15) = 0.891, p = 0.069; WPAQ(15) = 0.848, p = 0.021; WPLQ+MEQ(20) = 0.910, p = 0.074; WPLQ+PAQ(20) = 0.868, p = 0.011). The Kruskal–Wallis H test showed that the differences were statistically significant (H(4) = 15.555, p = 0.004). Post-hoc tests revealed statistically significant differences between the PLQ + MEQ and PLQ + PAQ strategies and the PLQ and MEQ strategies (all p < 0.05, see Figure 4). There was also a marginally significant difference between the MEQ strategy and the PLQ + MEQ (p = 0.065) and PLQ + PAQ (p = 0.062) strategies. In other words, learners who watched a video with one of the two hybrid questioning strategies paid significantly more attention than those who watched a video with a single questioning strategy. No other significant differences were observed (p > .05).

Figure 4. Differences in high attention percentage across the five strategies.

Note: *p < .05.


3.1.2. Mean attention score

There was a statistically significant difference in mean attention scores across the five strategies (F(4, 80) = 5.35, p = 0.001, η2 = 0.22). Post-hoc tests (LSD) revealed that participants exposed to the PLQ + MEQ and PLQ + PAQ strategies had significantly higher scores than those exposed to the PLQ and MEQ strategies (all p < 0.05), and participants exposed to the PLQ + PAQ strategy also had significantly higher scores than those exposed to the PAQ strategy (p = 0.028, see Figure 5). In contrast to the high attention percentage, there was no significant difference between the PAQ and PLQ + MEQ strategies in terms of the mean attention score (p = 0.096). No other significant differences were observed (p > .05).

Figure 5. Differences in mean attention across the five strategies.

Note: *p < .05.


3.1.3. Time plot

This study drew time plots to dynamically reflect the changing trend of attention levels and to observe overall and local characteristics under the five conditions (Figure 6). Generally, the learners’ attention levels were highest under the PLQ + PAQ strategy, but the upward and downward trends of the five strategies at adjacent moments were basically the same. Furthermore, the attention levels of the three groups with lead-in questions (PLQ, PLQ + MEQ, and PLQ + PAQ) showed a gradual upward trend in the first five seconds of the video, indicating that adding lead-in questions before the video could somewhat improve learners’ attention levels. As the learning time increased, the attention levels of the groups whose videos did not contain enhancement questions in the middle (PLQ, PAQ, and PLQ + PAQ) gradually declined, whereas the attention levels of the groups whose videos did contain such questions (MEQ and PLQ + MEQ) gradually increased. Likewise, the attention levels of the groups whose videos did not end with assessment questions (PLQ, MEQ, and PLQ + MEQ) gradually declined, whereas the attention levels of the groups whose videos did (PAQ and PLQ + PAQ) gradually rose. These results demonstrate the effectiveness of embedding questioning strategies in online videos to enhance learners’ attention levels. Surprisingly, based on the overall trend in Figure 6, the mean attention scores of learners exposed to the PLQ strategy appeared at certain moments to be higher than those of learners exposed to the PLQ + MEQ and PLQ + PAQ strategies. Likewise, the mean attention scores of learners exposed to the MEQ strategy at times exceeded those of learners exposed to the PLQ + MEQ strategy, and the mean attention scores of learners exposed to the PAQ strategy exceeded those of learners exposed to the PLQ + PAQ strategy. These results indirectly suggest that the two hybrid questioning strategies may have induced a higher cognitive load on learners than the single questioning strategies, owing to the amount and complexity of information that had to be processed.

Figure 6. Time plot across the five strategies.

Note: Mean attention represents the average attention level of learners for different questioning strategies every five seconds.

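A minimal sketch of how such a time plot can be produced (the attention traces here are invented; only the 5-second binning follows the note above):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
video_seconds = 27 * 60   # approximate video length in seconds
bin_size = 5              # attention is averaged over 5-second windows

# Invented per-second group-average attention traces for illustration.
traces = {name: np.clip(rng.normal(60, 8, video_seconds), 0, 100)
          for name in ["PLQ", "MEQ", "PAQ", "PLQ+MEQ", "PLQ+PAQ"]}

for name, trace in traces.items():
    usable = trace[: len(trace) // bin_size * bin_size]
    binned = usable.reshape(-1, bin_size).mean(axis=1)  # 5-second means
    plt.plot(np.arange(binned.size) * bin_size, binned, label=name)

plt.xlabel("Time (s)")
plt.ylabel("Mean attention")
plt.legend()
plt.show()
```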

Taken together, these findings suggest that the PLQ + MEQ and PLQ + PAQ strategies effectively increased the participants’ high attention percentages compared with the PLQ, MEQ, and PAQ strategies. Additionally, in terms of mean attention scores, learners exposed to the PLQ, MEQ, and PAQ strategies scored lower than those exposed to the PLQ + PAQ strategy, while learners exposed to the PLQ and MEQ strategies scored lower than those exposed to the PLQ + MEQ strategy. Overall, learners exposed to the PLQ + PAQ strategy had the best attention levels in terms of both high attention percentage and mean attention score. Surprisingly, for both indices, there were no differences between the PLQ + MEQ and PLQ + PAQ strategies and no differences among the PLQ, MEQ, and PAQ strategies. These results partially support H1.

3.2. The mediating effect of attention level on learning performance

To examine whether attention level mediates the relationship between questioning strategies and learning performance, this study performed a simple mediation analysis using Model 4 in SPSS PROCESS v4.1. The indirect effect of attention level was assessed using a bias-corrected and accelerated (BCa) bootstrapped confidence interval (CI) based on 5,000 samples. In the mediation model, the different questioning strategies (PLQ, MEQ, PAQ, PLQ + MEQ, and PLQ + PAQ) were coded as dummy variables, whereas attention level (the mediator) and learning performance (the dependent variable) were treated as continuous variables. The detailed outputs of the simple mediation model are presented in Table 2.
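A simplified sketch of a single-mediator bootstrap in the spirit of PROCESS Model 4 is shown below; it uses a percentile bootstrap rather than the BCa intervals reported in the study, and all data and variable names are placeholders:

```python
import numpy as np
import statsmodels.api as sm

def indirect_effect(x, m, y):
    """a*b indirect effect for a single-mediator model (cf. PROCESS Model 4)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # M -> Y given X
    return a * b

# Placeholder data: x is a group dummy (e.g. PLQ+MEQ vs. the PLQ reference),
# m is attention level, y is learning performance.
rng = np.random.default_rng(2)
x = np.repeat([0.0, 1.0], 17)
m = 55 + 8 * x + rng.normal(0, 6, x.size)
y = 14 + 0.1 * m + rng.normal(0, 2, x.size)

# Percentile bootstrap CI for the indirect effect (5,000 resamples).
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, x.size, x.size)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
print("95% CI:", np.percentile(boot, [2.5, 97.5]))
```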

Table 2. The analysis results of the mediating effect of attention level on learning performance.

Using the PLQ group as a reference, the mediating effects of attention level on learning performance were 0.152 and 0.243 for the MEQ and PAQ groups, respectively. The 95% bias-corrected bootstrap CIs were –0.131 to 0.491 for MEQ and –0.081 to 0.692 for PAQ. In both cases, the 95% bootstrap CI included zero, indicating that attention level did not mediate the relationship between these questioning strategies (MEQ and PAQ) and learning performance.

However, the mediating effects of attention level on learning performance were 0.491 and 0.568 in the PLQ + MEQ and PLQ + PAQ groups, respectively. The 95% bias-corrected bootstrap CIs were 0.155–0.922 for PLQ + MEQ and 0.206–1.066 for PLQ + PAQ. In both cases, the CI did not include zero values. It is worth noting that, after adding attention level as a mediator, the direct effect of the PLQ + PAQ strategy on learning performance was –2.931, with a CI of –5.241 to –0.622 (excluding zero). These results suggest that attention level was the only mediator between the PLQ + MEQ strategy and learning performance, whereas it played a suppressive role in the relationship between the PLQ + PAQ strategy and learning performance. These findings partially support H2.

4. Discussion

This study used EEG technology to analyse students’ attention levels and investigated how different questioning strategies in online videos affected attention levels and learning performance from an instructional design perspective. This study also explored the mediating role of attention levels in the relationship between questioning strategies and learning outcomes. It was found that the adoption of questioning strategies enhanced attention levels to a certain degree. In particular, students who were exposed to the PLQ + PAQ strategy demonstrated higher attention levels. Moreover, attention levels had different mediating effects on the relationship between specific questioning strategies (PLQ + MEQ and PLQ + PAQ) and learning performance.

4.1. Attention level

Our findings are consistent with previous research studies (Cain et al., Citation2009; Chen & Yeh, Citation2019; Mayer, 2005; Mirriahi et al., Citation2021; Szpunar et al., Citation2013), which suggest that embedding questions in online videos can enhance learners’ attention levels. However, while previous studies have only compared the effects of a single questioning strategy on attention levels, this study systematically explored five questioning strategies and found that different types of questioning strategies affect attention levels differently.

First, a noteworthy finding is that combining two questioning strategies significantly improved students’ overall attention levels compared to single questioning strategies, which has not been observed in prior research. Another interesting finding is that overall attention levels were more stable under the MEQ strategy. This outcome is consistent with, and helps clarify, the assumptions of relevant prior research. Kuo et al. (Citation2017), Rice et al. (Citation2019), and Wachtler and Ebner (Citation2015) stressed that the most effective placement of questions for short-term memory is to embed them throughout a video, rather than grouping them together at the end. In our study, although the attention variables (including high attention percentage and mean attention scores) for the MEQ strategy were somewhat low among all the strategies, it had the smallest IQR for high attention percentage and the lowest standard deviation for mean attention scores (Table 1). This indicates that, despite not reaching the highest attention level, learners exposed to the MEQ strategy showed a more stable and similar sustained attention level.

Moreover, there were times when learners’ attention levels were higher under single strategies than under the combined questioning strategies. This may be due to the greater amount of information presented in the latter. According to cognitive load theory, students’ working memory capacity is limited, and their attention can only be allocated to a small fraction of the incoming information at a time (Baddeley, Citation1992). Information overload can result in a heavy cognitive load on learners (Sweller, Citation1994). Chen et al. (Citation2021) suggested that learners’ cognitive load was higher when the number of items was high and the presentation time was short. If the questioning strategy in a video is not meaningfully designed, its positive effect on attention levels may diminish, decreasing learners’ interest and even causing them to miss important information in the video (Cain et al., Citation2009). This may explain why, although combining two questioning strategies can boost students’ overall attention performance, simply superimposing them may increase the cognitive load and result in lower learning performance. Hence, this study suggests embedding hybrid questioning strategies in videos as interactive elements that act as scaffolding, so as to create a higher-quality interactive learning environment for learners and stimulate changes in their attention levels, while also taking the negative effects of cognitive load into account.

4.2. Mediating effect

MacKinnon et al. (Citation2000) identified three types of third-variable effects (mediation, confounding, and suppression) to clarify the nature of the relationship between independent and dependent variables. Contrary to expectations, our findings imply that learners’ attention levels do not mediate the relationship between the single questioning strategies (PLQ, MEQ, and PAQ) and learning performance. However, learners’ attention levels were the only mediator in the relationship between the PLQ + MEQ strategy and learning performance, while playing a suppressive role in the relationship between the PLQ + PAQ strategy and learning performance. The PLQ + MEQ strategy increased students’ likelihood of improving their learning performance by increasing their attention levels. By contrast, while the PLQ + PAQ strategy had a negative direct effect on learning performance, attention level suppressed this negative effect. One plausible reason is that a strategy containing a large number of questions may inadvertently increase cognitive load and overwhelm the learner, so that the PLQ + PAQ strategy hindered rather than promoted learning performance; at the same time, the PLQ + PAQ strategy raised students’ attention levels, which in turn improved their learning performance and offset part of this negative effect. In a study similar to the present one, Kokoç (Citation2021) found that self-regulated attention control mediated the negative relationship between social media multitasking and learning performance. Thus, attention levels may play a critical role in decreasing the negative effects of the PLQ + PAQ strategy on learning performance. Furthermore, for both hybrid questioning strategies, this study draws the consistent conclusion that the questioning strategies positively predict attention level and that attention level positively predicts learning performance. This can motivate teachers to incorporate the PLQ + MEQ and PLQ + PAQ strategies into online videos to promote better learning performance by enhancing students’ attention levels.

5. Conclusion

This study contributes valuable information to the literature on questioning strategies in online videos and offers useful guidelines for designing online videos. The study showed that questioning strategy was crucial for enhancing the effectiveness of online video teaching and for fostering meaningful interactions between learners and teachers. The main findings were that embedding questions in online videos increased learners’ attention levels and that using two questioning strategies together had a greater impact on learners’ attention levels than using one strategy alone. The MEQ strategy helped maintain learners’ attention levels throughout the learning process. Moreover, attention level mediated the effect of the PLQ + MEQ strategy on learning performance, but suppressed the effect of the PLQ + PAQ strategy on learning performance. These results reveal a complex relationship between questioning strategies in online videos and learners’ attention levels and learning performance.

This study has several limitations. First, the lack of differentiation in question types (questions without feedback in the PLQ strategy and questions with feedback in the other strategies) may have led to inconsistent influences on students exposed to different questioning strategies. For instance, Law and Chen (Citation2016) found that question prompts and feedback had an impact on students’ learning. Second, the topic of the online videos was teaching methods, created for a specialised course on educational technology, so the results might not apply to all online video content. In particular, because the video content was based on declarative knowledge, it was not possible to determine whether the findings are equally applicable to content involving procedural knowledge. Further studies are required to examine whether our results generalise to other topics. Third, this study had a small sample size, which may have led to power limitations. Fourth, the current study did not control for variables such as gender, prior knowledge level, and prior interest, which may have influenced the outcomes. Artino (Citation2008) demonstrated that experienced students tend to perform better than novice students in terms of learning processes and outcomes. Future experiments will incorporate control measures for these influential variables to enhance the robustness and generalisability of the findings.

Our results can serve as a reference for designing online videos. Based on these findings, the first practical implication of designing online videos is that teachers should be encouraged to design questions for online videos at a given teaching stage according to their teaching objectives. This means that the questioning strategy of online videos needs to be designed meaningfully, otherwise its positive impact on attention levels may be weakened. For example, if teachers want to capture learners’ attention at the beginning of the video, lead-in questions should be included before the instructional content, whereas if they want to assess learners’ learning performance, they can include assessment questions at the end of the video. Meanwhile, if teachers want learners to maintain a constant level of attention during the learning process, they need to include questions throughout the video or combine multiple questioning strategies, rather than only asking questions at the end or beginning of the video. Finally, the adverse effects of cognitive load should be considered when exploring the combined application of multiple questioning strategies because embedding too many questions in online videos can also cause cognitive overload for learners.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Social Science Fund of China.

Notes on contributors

Qingchao Ke

Qingchao Ke is a professor in the School of Information Technology in Education at South China Normal University. His research interests include technology-enhanced learning and education digitalisation.

Tingting Bao

Tingting Bao is a doctoral candidate at South China Normal University. Her research interests include technology-enhanced learning and AI in education.

Jieni Zhu

Jieni Zhu is a postgraduate student at South China Normal University. Her research interests include MOOCs and instructional design.

Xiufang Ma

Xiufang Ma is an associate professor in the School of Information Technology in Education at South China Normal University. Her research interests include instructional design and the AI curriculum in K-12.

References

  • Artino, A. R., Jr. (2008). Cognitive load theory and the role of learner experience: An abbreviated review for educational practitioners. AACE Review, 16(4), 425–439.
  • Azlan, C. A., Wong, J. H. D., Tan, L. K., A D Huri, M. S. N., Ung, N. M., Pallath, V., Tan, C. P. L., Yeong, C. H., & Ng, K. H. (2020). Teaching and learning of postgraduate medical physics using Internet-based e-learning during the COVID-19 pandemic – A case study from Malaysia. Physica Medica, 80, 10–16. https://doi.org/10.1016/j.ejmp.2020.10.002
  • Baddeley, A. D. (1992). Working memory. Science, 255(5044), 556–559. https://doi.org/10.1126/science.1736359
  • Bashir, K., Rauf, L., Yousuf, A., Anjum, S., Bashir, M. T., & Elmoheen, A. (2021). Teaching benign paroxysmal positional vertigo to emergency medicine residents by using Gagne’s nine steps of instructional design. Advances in Medical Education and Practice, 12, 1223–1227. https://doi.org/10.2147/AMEP.S309001
  • Bevilacqua, D., Davidesco, I., Wan, L., Chaloner, K., Rowland, J., Ding, M., Poeppel, D., & Dikker, S. (2019). Brain-to-brain synchrony and learning outcomes vary by student–teacher dynamics: Evidence from a real-world classroom electroencephalography study. Journal of Cognitive Neuroscience, 31(3), 401–411. https://doi.org/10.1162/jocn_a_01274
  • Brualdi Timmins, A. C. (1998). Classroom questions. Practical Assessment, Research, and Evaluation, 6(1), 6. https://doi.org/10.7275/05rc-jd18
  • Byun, H., Lee, J., & Cerreto, F. A. (2014). Relative effects of three questioning strategies in ill-structured, small group problem solving. Instructional Science, 42(2), 229–250. https://doi.org/10.1007/s11251-013-9278-1
  • Cain, J., Black, E. P., & Rohr, J. (2009). An audience response system strategy to improve student motivation, attention, and feedback. American Journal of Pharmaceutical Education, 73(2), 21. https://doi.org/10.5688/aj730221
  • Chen, C. H., & Yeh, H. C. (2019). Effects of integrating a questioning strategy with game-based learning on students’ language learning performances in flipped classrooms. Technology, Pedagogy and Education, 28(3), 347–361. https://doi.org/10.1080/1475939X.2019.1618901
  • Chen, L., Liu, L., Lin, Y., & Zhang, L. (2021). Research on cognitive load of graphic processing in virtual reality learning environment. Psychological Development and Education, 37(5), 619–627. https://doi.org/10.16187/j.cnki.issn1001-4918.2021.05.02
  • Chen, C. M., & Wang, J. Y. (2018). Effects of online synchronous instruction with an attention monitoring and alarm mechanism on sustained attention and learning performance. Interactive Learning Environments, 26(4), 427–443. https://doi.org/10.1080/10494820.2017.1341938
  • Deng, R., Feng, S., & Shen, S. (2023). Improving the effectiveness of video-based flipped classrooms with question-embedding. Education and Information Technologies, 2023, 5. https://doi.org/10.1007/s10639-023-12303-5
  • Deng, R., & Gao, Y. (2023). Effects of embedded questions in pre-class videos on learner perceptions, video engagement, and learning performance in flipped classrooms. Active Learning in Higher Education, 2023, 146978742311670. https://doi.org/10.1177/14697874231167098
  • Feng, Z. (2013). Using teacher questions to enhance EFL students’ critical thinking ability. Journal of Curriculum and Teaching, 2(2), 147–153. https://doi.org/10.5430/jct.v2n2p147
  • Gagne, R. M. (1970). The conditions of learning (2nd ed.). Holt, Rinehart & Winston.
  • Haagsman, M. E., Scager, K., Boonstra, J., & Koster, M. C. (2020). Pop-up questions within educational videos: Effects on students’ learning. Journal of Science Education and Technology, 29(6), 713–724. https://doi.org/10.1007/s10956-020-09847-3
  • Hoyle, R. H., & Kenny, D. A. (1999). Statistical power and tests of mediation. In R. H. Hoyle (Ed.), Statistical strategies for small sample research (pp. 195–222). SAGE.
  • Hwang, G. J., Zou, D., & Lin, J. (2020). Effects of a multi-level concept mapping-based question-posing approach on students’ ubiquitous learning performance and perceptions. Computers & Education, 149, 103815. https://doi.org/10.1016/j.compedu.2020.103815
  • Jung, Y., & Lee, J. (2018). Learning engagement and persistence in massive open online courses (MOOCS). Computers & Education, 122, 9–22. https://doi.org/10.1016/j.compedu.2018.02.013
  • Kokoç, M. (2021). The mediating role of attention control in the link between multitasking with social media and academic performances among adolescents. Scandinavian Journal of Psychology, 62(4), 493–501. https://doi.org/10.1111/sjop.12731
  • Kokoç, M., Ilgaz, H., & Altun, A. (2020). Effects of sustained attention and video lecture types on learning performances. Educational Technology Research and Development, 68(6), 3015–3039. https://doi.org/10.1007/s11423-020-09829-7
  • Kolås, L. (2015, June). Application of interactive videos in education. In 2015 International Conference on Information Technology Based Higher Education and Training (ITHET) (pp. 1–6). IEEE. https://doi.org/10.1109/ITHET.2015.7218037
  • Kolås, L., Nordseth, H., & Hoem, J. (2016, September). Interactive modules in a MOOC. In 2016 15th International Conference on Information Technology Based Higher Education and Training (ITHET) (pp. 1–8). IEEE. https://doi.org/10.1109/ITHET.2016.7760707
  • Kuo, Y. C., Chu, H. C., & Tsai, M. C. (2017). Effects of an integrated physiological signal-based attention-promoting and English listening system on students’ learning performance and behavioral patterns. Computers in Human Behavior, 75, 218–227. https://doi.org/10.1016/j.chb.2017.05.017
  • Law, V., & Chen, C. H. (2016). Promoting science learning in game-based learning with question prompts and feedback. Computers & Education, 103, 134–143. https://doi.org/10.1016/j.compedu.2016.10.005
  • Leisner, D., Zahn, C., Ruf, A., & Cattaneo, A. (2020, July). Different ways of interacting with videos during learning in secondary physics lessons. In International Conference on Human-Computer Interaction (pp. 284–291). Springer, Cham. https://doi.org/10.1007/978-3-030-50729-9_40
  • Leszczyński, P., Charuta, A., Łaziuk, B., Gałązkowski, R., Wejnarski, A., Roszak, M., & Kołodziejczak, B. (2018). Multimedia and interactivity in distance learning of resuscitation guidelines: A randomised controlled trial. Interactive Learning Environments, 26(2), 151–162. https://doi.org/10.1080/10494820.2017.1337035
  • MacKinnon, D. P., Krull, J. L., & Lockwood, C. M. (2000). Equivalence of the mediation, confounding and suppression effect. Prevention Science, 1(4), 173–181. https://doi.org/10.1023/a:1026595011371
  • Mirriahi, N., Jovanović, J., Lim, L. A., & Lodge, J. M. (2021). Two sides of the same coin: Video annotations and in-video questions for active learning. Educational Technology Research and Development, 69(5), 2571–2588. https://doi.org/10.1007/s11423-021-10041-4
  • Mitchell, M. W. (1997). The effects of embedded question type and locus of control on processing depth, knowledge gain, and attitude change in a computer-based interactive video environment [Doctoral dissertation]. Virginia Polytechnic Institute and State University. http://hdl.handle.net/10919/30299
  • Polat, H., & Taslibeyaz, E. (2023). Examining interactive videos in an online flipped course context. Education and Information Technologies, 2023, 1. https://doi.org/10.1007/s10639-023-12048-1
  • Poulsen, A. T., Kamronn, S., Dmochowski, J., Parra, L. C., & Hansen, L. K. (2017). EEG in the classroom: Synchronised neural recordings during video presentation. Scientific Reports, 7(1), 43916. https://doi.org/10.1038/srep43916
  • Pratama, W. (2019). An analysis of teacher’s questioning strategies in teaching English at the tenth grade of SMAN 1 Sambit [Doctoral dissertation]. State Institute of Islamic Studies of Ponorogo (IAIN PONOROGO). https://etheses.iainponorogo.ac.id/8255/1/upload%20e.theses%20fix%208%20november.pdf
  • Rice, P., Beeson, P., & Blackmore-Wright, J. (2019). Evaluating the impact of a quiz question within an educational video. TechTrends, 63(5), 522–532. https://doi.org/10.1007/s11528-019-00374-6
  • Samaila, K., & Al-Samarraie, H. (2023). Reinventing teaching pedagogy: The benefits of quiz-enhanced flipped classroom model on students’ learning outcomes and engagement. Journal of Applied Research in Higher Education, 2023, 173. https://doi.org/10.1108/JARHE-04-2023-0173
  • Shelton, C. C., Warren, A. E., & Archambault, L. M. (2016). Exploring the use of interactive digital storytelling video: Promoting student engagement and learning in a university hybrid course. TechTrends, 60(5), 465–474. https://doi.org/10.1007/s11528-016-0082-z
  • Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295–312. https://doi.org/10.1016/0959-4752(94)90003-5
  • Szpunar, K. K., Khan, N. Y., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences of the United States of America, 110(16), 6313–6317. https://doi.org/10.1073/pnas.1221764110
  • Tang, T., Abuhmaid, A. M., Olaimat, M., Oudat, D. M., Aldhaeebi, M., & Bamanger, E. (2020). Efficiency of flipped classroom with online-based teaching under COVID-19. Interactive Learning Environments, 31(2), 1077–1088. https://doi.org/10.1080/10494820.2020.1817761
  • van der Meij, H., & Bӧckmann, L. (2021). Effects of embedded questions in recorded lectures. Journal of Computing in Higher Education, 33(1), 235–254. https://doi.org/10.1007/s12528-020-09263-x
  • Wachtler, J., & Ebner, M. (2015). Impacts of interactions in learning-videos: A subjective and objective analysis. In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2015 (pp. 1205–1213). AACE. https://www.researchgate.net/publication/279912372_Impacts_of_Interactions_in_Learning-Videos_A_Subjective_and_Objective_Analysis
  • Wilson, K., & Korn, J. H. (2007). Attention during lectures: Beyond ten minutes. Teaching of Psychology, 34(2), 85–89. https://doi.org/10.1080/00986280701291291
  • Wu, J. Y. (2017). The indirect relationship of media multitasking self-efficacy on learning performance within the personal learning environment: Implications from the mechanism of perceived attention problems and self-regulation strategies. Computers & Education, 106, 56–72. https://doi.org/10.1016/j.compedu.2016.10.010