Research Article

Making educational videos more engaging and enjoyable for all ages: an exploratory study on the influence of embedded questions

Morgan B. Zolkwer, Rafael Hidalgo & Bryan F. Singer
Pages 283-297 | Received 20 Oct 2022, Accepted 23 Mar 2023, Published online: 22 Apr 2023

ABSTRACT

There is individual variation in how people interact with videos presented in online distance education. Educational videos can be embedded with interactive content to increase engagement and make cognition more efficient. Accordingly, we predicted that embedding questions during videos (rather than after) would enhance question-answering performance and be preferred by students. We also hypothesised that the benefits of presenting questions during videos might increase with age. Using a counter-balanced within-subject design, each participant watched short videos with questions embedded either during the video or presented after the video, and we then surveyed their experiences. Although there were no differences in correct responses, participants answered questions posed during videos more efficiently than questions presented after. Females enjoyed questions during videos more than males. Younger individuals (e.g. 25–34) seemed to benefit more from questions during videos than slightly older students (35–44). Interestingly, with increasing age (from 25 to 74), there was a shift in preference towards answering questions after, rather than during, videos. Overall, embedding questions was an effective and well-liked method for enhancing the interactivity of module-related videos. The age of students should be considered when embedding questions.

1. Introduction

Educational videos are often vital resources for supporting instruction in virtual learning environments (VLEs). The use of VLE videos has increased during the COVID-19 pandemic, as many education providers have adopted distance learning to deliver tuition (Dedeilia et al., 2020; Iwanaga et al., 2021; Pokhrel & Chhetri, 2021; World Health Organization, 2021). Importantly, distance learners are a heterogeneous group, with a higher representation of older, working, and female individuals than non-distance learners (Chen et al., 2019; Latanich et al., 2001). Therefore, to improve the development of VLE-based educational videos, we need to understand individual variation in the benefits of video interactivity and in people’s preferences; we take an experimental approach to investigate these issues.

While many VLE videos are recorded lectures (Cummins et al., 2016; Lavigne & Risko, 2018; Phillips et al., 2016; Szpunar et al., 2014; van der Meij & Bӧckmann, 2021), videos can also present virtual experiential learning opportunities, including lab or patient demonstrations, case-study interviews, and descriptions of structures or mechanisms that are difficult to verbalise in traditional lectures (Arguel & Jamet, 2009; Kestin & Miller, 2022; Mayer & Chandler, 2001; Mayer et al., 2003; Torres et al., 2022). According to the Cognitive Theory of Multimedia Learning, such narrative videos can help students ‘build a mental model of a cause-and-effect system’ that reduces cognitive load and subsequently improves their learning (Mayer & Chandler, 2001, p. 396). While this is helpful when guiding learners to acquire knowledge (Noetel et al., 2021), it is critical not to overload students with complex information presented in videos. To further reduce cognitive load, videos can be segmented into smaller chunks, and signals can be used to help focus a student’s attention (Ibrahim, 2012). Accordingly, the present experiments explore how to improve the learning efficiency of VLE-based demonstration videos by dividing them into sections with embedded questions. We investigated performance on Question Embedded Videos (QEVs) and students’ preferences for including QEVs in online modules. We were particularly interested in how individuals vary in their experience of QEVs, and thus we assessed differences across gender identities and age ranges.

The use of ‘pop-up’ questions within narrative demonstration videos aligns well with ‘flipped classroom’ approaches to education (Chouhan, 2021; Haagsman et al., 2020; Lo & Hew, 2017; O’flaherty & Phillips, 2015) and is supported by constructivist models of learning (Vural, 2013). Accordingly, engagement with the narrative videos is self-paced, and students must discover information independently; instructors do not deliver lessons but instead act as mediators of knowledge. Students can connect video content with their previous experiences; this personalisation of understanding can help reduce cognitive load (Torres et al., 2022) and may be especially useful for experience-rich adult learners (Khalil & Elkhider, 2016). Cognitive load may be reduced because students employ a viewing strategy, such as connecting with experiences or finding ways to answer questions optimally (Costley et al., 2020).

The reduction in cognitive load afforded by QEVs might help motivate students to engage with module materials and improve student satisfaction (Haagsman et al., 2020; van der Meij & Bӧckmann, 2021; Vural, 2013). Furthermore, using test-like questions within videos, as opposed to other forms of interactivity, likely enhances student performance on later exams (Cummins et al., 2016; Kestin & Miller, 2022; Szpunar et al., 2013, 2014; Torres et al., 2022; Yang & Xie, 2021). It has long been known that guiding students through videos with paper-based questions can improve student module performance (Lawson et al., 2006). Taking tests (e.g. learning to retain and retrieve knowledge during videos with questions) may promote test-taking ability on later assessments (Jacoby et al., 2010). Alternatively, there may be an indirect relationship between improved module performance and earlier experience with QEVs; such videos may improve focus, encourage note-taking, and increase the amount of time spent studying (Haagsman et al., 2020; Lawson et al., 2006; Szpunar et al., 2013; Vural, 2013). This manuscript examines how the placement of questions within narrative (non-lecture) videos impacts performance and satisfaction.

Most research on the efficacy and experience of QEVs studies typical undergraduate-age students (e.g. around 21 years of age; Cummins et al., 2016; Haagsman et al., 2020; Torres et al., 2022; van der Meij & Bӧckmann, 2021). Lifelong learning can have many benefits for the health and well-being of individuals, with advantages ranging from decreasing depression to enhancing social satisfaction (Laal & Salamati, 2012; Narushima, 2008; Weinstein, 2004). Older students may prefer to learn via video (e.g. ages > 41; Simonds & Brock, 2014), possibly because they can learn at their own pace and desire access to audio-visual materials (Heaggans, 2012; Weinstein, 2004). Such preferences may be related to the increased cognitive effort required for tasks as we age (Hess & Ennis, 2012), an effort that videos may reduce. We therefore hypothesised that, by reducing cognitive load (Cummins et al., 2016; Torres et al., 2022), QEVs may help older participants learn and enhance their viewing experience.

Adults may be primarily motivated to study because an employment requirement for a university degree is becoming the norm; additional education may therefore be required for individuals who decide to re-enter the workforce (Osam et al., 2017). Despite this, adult education can be challenging to deliver, as student retention may suffer when individuals must balance work/life commitments with their coursework (Renner & Skursha, 2022). Therefore, education for adults should be governed by principles of humanism, whereby individuals have control over their personal growth and are responsible for their independent learning (Arghode et al., 2017). More specifically, andragogy is a humanistic approach to studying adult learning, theorising that prior experiences contribute significantly to adult learning and that adults are highly self-directed in their education (Arghode et al., 2017; Khalil & Elkhider, 2016). Accordingly, QEVs may be beneficial andragogic tools, as they promote asynchronous, flexible, and self-paced learning. However, focusing on techniques associated with andragogy alone may be inadequate, as such techniques often overlook how learning strategies previously used by an individual may contribute to (or hinder) their adult learning, and thus may not fully acknowledge individual variation in preferred learning methods (Arghode et al., 2017).

Studies of ‘flipped classroom’ or online distance education often do not detect differences in performance between people identifying as male or female (Chen et al., 2019; Yu, 2021). Despite this, females may be more engaged in their education, while males may have a more stable positive perception of online learning (Nistor, 2013; Richardson & Woodley, 2003). While there is significant individual variation, it has been suggested that enhanced female engagement and confidence during online learning, relative to males, may be related to their ability to multi-task and resist distractions (Alghamdi et al., 2020; Price, 2006; Stoet et al., 2013). In part, enhanced engagement with online education for females may be facilitated by interactions with tutors (Price, 2006) and the desire to access supplemental online resources, possibly including videos (Martin & Bolliger, 2018). Thus, while we did not anticipate detecting differences in question-answering performance between males and females, we expected that females might show an enhanced preference for engagement opportunities, such as QEVs, in online education.

The present work investigates whether splitting online videos into multiple interactive segments impacts question-answering performance (e.g. correctness, answering speed) and whether it increases engagement and satisfaction. Accordingly, we inserted questions into existing online videos, measured performance on these prompts, and identified how students felt about the experience. Notably, the videos presented to participants had already been produced, and questions were inserted using commercially available software. Therefore, if adopted for teaching, this approach requires minimal effort from the instructor, who primarily needs to create the formative or summative test questions to be inserted into existing videos. Accordingly, we aimed to develop a replicable strategy for improving learning from videos and, at a minimum, to create a more engaging experience for asynchronous learning (particularly for distance learners). Crucially, we analysed data across age ranges (25–74 years old) and gender identities; such detailed analyses can provide a more nuanced view of how the delivery of video-based educational material can be optimised to individual needs.

2. Materials & methods

2.1. Participants

Participants (n = 158; ranging from 18 to 74 years of age) were selected from the Curriculum Design Study Panel (CDSP) at The Open University. We compared question performance and survey responses between individuals aged 25–34 (n = 14), 35–44 (n = 17), 45–54 (n = 35), 55–64 (n = 33), and 65–74 (n = 15); this is a more comprehensive range of ages than in comparable studies, highlighting a novel aspect of our dataset (Cummins et al., 2016; Haagsman et al., 2020; Torres et al., 2022; van der Meij & Bӧckmann, 2021). We excluded 44 cases because they were incomplete or contained extreme outliers, leaving 114 complete cases for analysis (74 identified as female, 40 identified as male). Importantly, it remains problematic that most pedagogical literature focuses on comparing people who identify as male or female (Paechter et al., 2021); no participants in our random sample reported identifying as non-binary.

This study panel is voluntary, comprises current students at The Open University, and is refreshed annually. Consent is gained through self-selection for the study panel. The experiments and questions were approved by the University Human Research Ethics Committee (The Open University) and by the CDSP. Participants were randomly assigned to counterbalanced groups; therefore, we can assume equal variation in background knowledge and experience between the designated groups. Participants completed the experiment remotely in their own homes. The study was anonymous; therefore, participants could not contact one another for answers to questions. A within-subjects design was used, and each participant watched three videos, with questions either embedded during playback or placed after the video. Each participant experienced the same set of questions, regardless of question placement.

2.2. Videos and embedded questions

Questions were specific to video content, so internet searches could not easily be used to find answers. The videos and embedded questions were already used as text-based online module material at The Open University. Individuals were excluded from participating if they were currently or previously enrolled in the modules from which the videos were taken. Since the videos were specially produced for the modules, students would not have previously viewed them.

All experiments were conducted via The Open University VLE and PlayPosit, a video platform that allows interactive features to be added to standard video and provides analytics. The video files were hosted in PlayPosit, ensuring consistent question delivery across the different groups and experiments (Supplement A, Figure S1). PlayPosit served as a ‘wrapper’ for the videos, allowing the questions to be seamlessly embedded during or after the video. As shown in the figure, participants answered questions by clicking a radio button and hitting ‘submit’ (participants were familiar with this procedure from Open University modules and did not require training). Immediate feedback was provided (green highlights for correct answers and red for incorrect).

Three videos, on blindness, malaria, and first responders, were used in all experiments and were part of the University curriculum. The chosen videos had to meet four criteria: (1) the videos had to be already in use in current modules at The Open University, to increase the experiment’s external validity and avoid the added workload of creating new resources; (2) the videos should not have been previously viewed by the selected cohort of participants; (3) the videos had to be understandable by a lay audience; and (4) the videos had to differ in duration. While each video was less than 5 minutes long, viewing the videos while answering questions lasted approximately 20–30 minutes (completed in one sitting).

Questions tested retention of the information presented in the videos, and each video had three embedded questions. An example question is ‘Which of the following is a symptom of malaria?’, with multiple-choice answers including ‘Low body temperature’, ‘Severe headaches’, ‘Constipation’, and ‘Excessive energy’ (see Supplement B for a complete list of questions). For QEVs with questions presented during videos, questions occurred approximately 1 minute after the relevant information was given. We did not create the multiple-choice questions for the videos; questions were designed according to Open University guidance and were already displayed adjacent to the associated videos (e.g. click to unhide an answer) on module websites. After finishing the QEVs, participants completed a questionnaire and were invited to engage with an online feedback forum.
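For illustration only, the following is a minimal Python sketch of how an embedded multiple-choice question of this kind could be represented: each question carries the playback timestamp at which the video pauses, the prompt, the answer options, and the correct option used for immediate feedback. PlayPosit's actual data model is not described in this manuscript, so all names here (EmbeddedQuestion, pause_at_seconds, feedback) and the timestamp value are hypothetical.

```python
# Hypothetical sketch (not PlayPosit's API): a self-contained
# representation of one question embedded during a video.
from dataclasses import dataclass

@dataclass
class EmbeddedQuestion:
    pause_at_seconds: int   # playback pauses here, ~1 min after the relevant content
    prompt: str
    options: list[str]
    correct_index: int      # index into `options`

    def feedback(self, chosen_index: int) -> str:
        """Immediate feedback, mirroring the green/red highlighting."""
        return "correct" if chosen_index == self.correct_index else "incorrect"

# Example based on the malaria question quoted above (timestamp illustrative).
malaria_q = EmbeddedQuestion(
    pause_at_seconds=95,
    prompt="Which of the following is a symptom of malaria?",
    options=["Low body temperature", "Severe headaches",
             "Constipation", "Excessive energy"],
    correct_index=1,
)
print(malaria_q.feedback(1))  # -> correct
```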

2.3. Survey & forum

After the QEVs, participants were given a survey (taking approximately another 20 minutes). The survey was delivered using the Questionnaire tool of The Open University Moodle VLE (see Supplement C for questions). Briefly, survey questions first recorded demographics, previous experience with online learning, and video viewing habits during modules (including whether participants had previously encountered QEVs; most had not, so previous QEV experience was not factored into the analysis). We then assessed how participants interacted with the QEVs we presented, including whether they re-watched portions of the video (i.e. used the ‘rewind’ function), took notes, enabled captions, or changed the speed of the video. We did not control these features because we wished to assess more ‘natural’ student behaviour (i.e. how they would typically behave if they encountered videos in modules). However, note-taking was the only viewing behaviour that showed substantial individual variation, so it was the only one we assessed further. Finally, we measured participant opinions and satisfaction with the QEV experience. We evaluated, for example, whether participants enjoyed answering questions in videos and how motivated they were to perform well on the task. We also asked participants whether they preferred questions located during or after videos. Finally, CDSP participants could interact on a forum after completing the study to provide comments on the QEVs. We did not receive enough comments on this forum to conduct a detailed qualitative analysis (this was not the study’s primary purpose); instead, we highlight a few remarks in the results section of this manuscript.

2.4. Data analysis

To analyse QEV performance and time to answer, data were normalised using a log transformation. These dependent variables were assessed using two-way mixed ANOVAs (run separately for males and females), with responses to questions posed during or after videos as the within-subject factor and age group as the between-subject factor. Where appropriate, the ANOVAs were followed by post-hoc Šídák multiple comparison tests.
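As an illustration, here is a minimal sketch of this pipeline in Python, assuming a long-format DataFrame with hypothetical column names ('participant', 'gender', 'age_group', 'position', 'answer_time_s'); the pingouin library's mixed_anova and pairwise_tests functions implement the two-way mixed ANOVA and Šídák-corrected post-hoc comparisons described above. This is not the authors' actual analysis script.

```python
# Minimal sketch of the ANOVA pipeline; column names are assumptions.
import numpy as np
import pandas as pd
import pingouin as pg

def analyse_answer_times(df: pd.DataFrame) -> None:
    df = df.copy()
    # Normalise via log transformation, as described above.
    df["log_time"] = np.log(df["answer_time_s"])

    # Separate two-way mixed ANOVAs for males and females: question
    # position ('during'/'after') is the within-subject factor,
    # age group the between-subject factor.
    for gender, sub in df.groupby("gender"):
        aov = pg.mixed_anova(data=sub, dv="log_time", within="position",
                             subject="participant", between="age_group")
        print(f"--- {gender} ---\n", aov.round(4))

        # Post-hoc during-vs-after contrasts within each age group,
        # Šídák-adjusted for multiple comparisons.
        posthoc = pg.pairwise_tests(data=sub, dv="log_time",
                                    within="position", between="age_group",
                                    subject="participant", padjust="sidak")
        print(posthoc.round(4))
```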

Wilcoxon Signed-Rank tests were used to analyse survey-based data to compare within-subject opinions on questions asked during versus after videos. Responses to individual survey questions were analysed using Mann-Whitney U tests (males vs females) or Kruskal-Wallis H tests (across age groups, followed by Dunn multiple comparison post hoc tests).
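A corresponding sketch of the non-parametric survey analyses is below, again under assumed column names ('gender', 'age_group', 'enjoy_during', 'enjoy_after'); the Dunn post-hoc test comes from the scikit-posthocs package, and the p-value adjustment method shown is our assumption, as the text does not specify one.

```python
# Minimal sketch of the survey analyses; column names are assumptions.
import pandas as pd
from scipy import stats
import scikit_posthocs as sp

def analyse_survey(survey: pd.DataFrame) -> None:
    # Within-subject comparison: opinions on questions during vs after videos.
    print("Wilcoxon:", stats.wilcoxon(survey["enjoy_during"],
                                      survey["enjoy_after"]))

    # Males vs females on a single survey item.
    females = survey.loc[survey["gender"] == "female", "enjoy_during"]
    males = survey.loc[survey["gender"] == "male", "enjoy_during"]
    print("Mann-Whitney:", stats.mannwhitneyu(females, males))

    # Across age groups, followed by Dunn multiple-comparison tests
    # (adjustment method assumed; the paper does not state one).
    groups = [g["enjoy_during"].to_numpy()
              for _, g in survey.groupby("age_group")]
    print("Kruskal-Wallis:", stats.kruskal(*groups))
    print(sp.posthoc_dunn(survey, val_col="enjoy_during",
                          group_col="age_group", p_adjust="holm"))
```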

3. Results

3.1. Descriptive statistics and previous experience

Because all participants were students at The Open University (a distance-learning institution), all reported previous experience watching non-lecture videos to support module content. Students typically watched every video available during modules (56.14% watched all videos, 25.44% most, 4.39% half, and 14.04% few). Most students (84.21%) watched each module video once, whereas only 3.51% reported watching most videos more than once. While students could download module videos for offline viewing, most individuals preferred to watch videos online only (50.88%), and a minority of participants liked watching videos offline (4.39%). Interestingly, many students reported variation in their behaviour, with a substantial number watching module videos both online and offline (21.05%), having no preference (20.18%), or responding that their viewing behaviour ‘depends’ on the situation (2.63%). Critically for our present study, most participants had never before experienced QEVs in their coursework or elsewhere (82.20%).

We needed to assess whether students downloaded module videos for offline viewing; if students preferred watching videos offline, they might not have access to interactive content within videos in our VLE. In our survey, we received some written responses (18) regarding why individuals might choose to download video content. Most of these responses concerned students planning where they would study and whether they would have reliable internet access to watch videos. Participants explained that they were more likely to download videos for later viewing if their internet access was poor or if they were travelling. Thus, when teaching, it is essential to consider whether students can access online QEVs and whether poor internet access might disadvantage some students, particularly if QEVs are an examinable component of classwork.

3.2. Question-answering performance

We were interested in whether individual participants performed differently on questions asked during versus after videos. Our within-subject analysis found no significant difference in average scores between questions posed during and after videos. There were also no differences in scores between age groups or gender identities. Perhaps unsurprisingly, participants who reported taking notes during the videos (n = 23) scored better than those who did not (n = 91). This effect was significant for questions answered during videos (U = 713.5, p = 0.005; note-takers 97.8 ± 1.58% vs non-note-takers 87.6 ± 1.81%) and showed a strong trend for questions asked after videos (U = 806.5, p = 0.057; note-takers 94.2 ± 2.25% vs non-note-takers 85.8 ± 2.02%). Given these high scores, most participants appeared to retain the information presented in the videos. Therefore, we were interested in whether patterns of video engagement (e.g. speed of answering) differed across individuals and according to where questions were placed (i.e. during or after the video).

Participants answered questions posed during videos significantly faster than those presented after watching (Figure 1; females, F(1,69) = 38.34, p < 0.0001; males, F(1,35) = 26.59, p < 0.0001; separate two-way ANOVAs for males and females, with age as the between-subject factor and question position as the within-subject factor). Post-hoc Šídák multiple comparison tests found significant differences in question-answering speed (during vs after) for ages 25–34 (p = 0.0069), 45–54 (p = 0.0002), and 55–64 (p = 0.0001) in females, and for ages 35–44 (p = 0.017), 45–54 (p = 0.030), and 55–64 (p = 0.010) in males. Thus, for both males and females, 65–74 year-olds were the only age range in which question-answering speed did not differ by question location (during or after videos). Finally, we did not detect differences between males and females in question-answering speed for questions positioned during or after videos. Note-taking did not significantly impact question-answering time.

Figure 1. Speed of answering questions: both females (a; n = 74) and males (b; n = 40) answered questions faster when they were positioned during videos than after videos (****, p < 0.0001). The speed of answering questions was normalised via log transformation. Bars represent age ranges (±SEM) and individual data points (grey circles) are shown.

3.3. Opinions on QEVs

Next, we surveyed participants’ opinions on the experience of answering questions posed during or after videos. Participants responded favourably to answering questions about video content, regardless of when questions were presented (Table 1). Our within-subject analysis found that participants enjoyed answering questions during videos more than after (Wilcoxon, Z = −2.21, p = 0.026). This was supported by qualitative feedback we received on QEVs: participants ‘liked and felt comfortable with the format’, ‘thought the design was brilliant and tailored to the user’s experience’, believed that ‘answering the questions helped to reinforce what I had learned’, and thought that ‘giving questions intermittently does help in learning and assimilating information’.

Table 1. Overall comparison of attitudes to questions asked during vs after videos (ages and gender identities combined). Results show averages (out of 10; ±SEM) and Wilcoxon test statistics (Z & p). n = 114; *, p < 0.05.

According to separate analyses across gender identifications, females enjoyed answering questions during videos more than males (Figure 2 and Table 2; U = 1117.5, p = 0.027; females, 8.19/10 ± 0.27; males, 7.25/10 ± 0.41). Compared to males, females also thought that questions presented during videos helped them to understand video content (U = 1082, p = 0.015; females, 7.89/10 ± 0.30; males, 6.63/10 ± 0.47). There were no other differences in opinions between males and females regarding questions posed during or after videos. Interestingly, these results aligned with our finding that, when taking classes, females tended to engage with videos more than males by re-watching videos (U = 1163, p = 0.007), although both males and females were equally likely to watch most videos at least once (U = 1449, p = 0.84); there were no significant differences in how students watched videos during modules across age ranges.

Figure 2. Comparison of attitudes to questions asked during vs after videos between males (n = 40) and females (n = 74). Statistics are presented in Table 2. Survey responses are scored out of 10. Females are shown in white bars and males in black (±SEM). Grey circles represent individual data points. *, p < 0.05.

Table 2. Comparison of attitudes to questions asked during vs after videos between males (n = 40) and females (n = 74). Results are also presented in Figure 2. Mann-Whitney test statistics are shown in this table (U & p). *, p < 0.05.

We also compared survey responses regarding the experience of questions during or after videos across age ranges [25–34 (n = 14), 35–44 (n = 17), 45–54 (n = 35), 55–64 (n = 33), and 65–74 (n = 15)]. As summarised in Figure 3 and Table 3, individuals across age ranges were generally positive in their opinions about QEVs. Based on post-hoc tests (Table 3), participants in the 25–34 and 45–54 age ranges were often the most positive about questions posed during videos, whereas those aged 35–44 were likely the least positive. Differences in opinions across ages on questions posed after videos were less clear, but followed a similar pattern, with 35–44 year-olds being least enthusiastic. Notably, 65–74 year-olds responded most positively to questions posed after videos, which may relate to their overall preference for question position (see below).

Figure 3. Comparison of attitudes to questions asked during vs after videos across ages [25–34 (n = 14), 35–44 (n = 17), 45–54 (n = 35), 55–64 (n = 33), and 65–74 (n = 15)]. Statistics are presented in Table 3. Survey responses are scored out of 10. Bars represent average scores (±SEM). Grey circles represent individual data points. *, p < 0.05.

Table 3. Comparison of attitudes to questions asked during vs after videos across age groups [25–34 (n = 14), 35–44 (n = 17), 45–54 (n = 35), 55–64 (n = 33), and 65–74 (n = 15)]. Results are also presented in Figure 3. Kruskal-Wallis H tests with Dunn multiple-comparison tests (for specific age ranges) are shown in this table. *, p values range from < 0.0001 to 0.05.

Finally, on a single scale, we asked participants to rate whether they preferred questions posed during videos (scored 0/10) to after videos (scored 10/10). There were no differences in preference between males and females (Figure 4(a); Mann-Whitney, U = 1199, p = 0.11). In contrast, there was a significant effect of age on preference (Kruskal-Wallis H test, H(4) = 11.01, p = 0.027), with older participants seeming to prefer questions after videos rather than during videos (although post-hoc tests were not significant). Such a finding for older participants aligns with their higher enjoyment of questions posed after the videos (Figure 3 and Table 3).

Figure 4. Preference for questions posed during (score of 0/10) or after (score of 10/10) videos. Data compare either males vs females (a) or across ages (b). Bars represent average scores (±SEM). Grey circles represent individual data points. n = 114. *, p < 0.05.

4. Discussion

In online distance learners, we examined individual variation in the performance and self-reported benefits of answering questions embedded within educational videos. Participants watched three videos, and questions on video content were asked during or after the videos. While the percentage of correct answers did not differ according to question location, participants were generally quicker at answering questions posed during videos than after. Females enjoyed answering questions posed during videos more than males and felt that answering such questions helped their understanding. Despite this, males and females did not differ in their overall preference for questions being located during or after videos. Younger participants (aged 25–34) were the most positive about questions posed during videos. Interestingly, unlike younger participants, older students (65–74) did not differ in the speed at which they answered questions posed during or after videos. In addition, 65–74 year-olds enjoyed answering questions after videos more than other groups, and they generally preferred answering questions after videos rather than during. Together, while the results illustrate that participants were overwhelmingly positive about QEVs, there were some differences in their experience across genders and ages; these individual differences should be accounted for when developing QEVs for specific student populations.

This study builds upon the emerging literature on how narrative demonstration videos may be valuable additions to an individual’s online education (Chouhan, 2021; Haagsman et al., 2020; Lo & Hew, 2017; O’flaherty & Phillips, 2015). Similar to previous research, we found that students overall had a positive experience with QEVs (Haagsman et al., 2020). We initially hypothesised that embedding questions during videos might uniformly improve the learning experience, as dividing videos into manageable sections may optimise an individual’s cognitive load (Mayer & Chandler, 2001). Although we did not perform psychological tests of cognitive function, we found that the youngest group (25–34 year-olds) often enjoyed and benefited from questions embedded during videos more than other age groups (especially compared to the slightly older 35–44 year-olds). This contrasts with our expectation that, with increasing age, individuals might find questions embedded during videos more helpful, because the videos could help reduce cognitive effort (Laal & Salamati, 2012; Narushima, 2008; Simonds & Brock, 2014; Weinstein, 2004) and promote self-paced adult learning (according to principles of andragogy; Arghode et al., 2017). It is unclear why the oldest group of students (65–74 year-olds) seemed to prefer questions asked after videos, but it may be related to age-related changes in selective attention and distractibility (Guerreiro et al., 2010). In our study, it is possible that questions positioned during videos disrupted concentration for some individuals, and regaining attention was challenging once the video restarted. Since the videos we presented were relatively short (less than 5 minutes each), perhaps answering three questions within this period was too disruptive. Future research could investigate the impact of longer videos and compare results on video performance to measures of cognitive load and selective attention (e.g. a Stroop task; Davidson et al., 2003). It may also be possible to promote attention and engagement by supplementing the embedded questions with other enhanced visuals; research suggests that combining such features is effective for teaching scientific concepts, although this has not been investigated across age ranges and gender identities (Kestin & Miller, 2022).

Several other limitations and considerations for future research should be highlighted. First, while we did not fully control for computer literacy, all participants were enrolled at a distance-learning university and were comfortable with online videos and VLEs. They are therefore likely to be more proficient computer users than members of the general population, especially the older students (Laal & Salamati, 2012; Narushima, 2008; Weinstein, 2004). That said, most participants (82.20%) had no prior experience with QEVs, supporting the generalisability of the results. Second, we did not compare genders at varying ages, as the resulting sample sizes would have been too small. Third, while we used a bespoke questionnaire to assess participant satisfaction with QEVs, subsequent studies may expand the survey to evaluate multiple dimensions for each response category. Fourth, our questions tested content retention and required a multiple-choice response. Future research will investigate more ‘formative’ questions that are unmarked and support creative thinking about the video content (Weurlander et al., 2012; Yorke, 2003). Such questions may, for example, encourage short-answer responses, prompt discussion, or assess prior knowledge; students generally find such questions rewarding and motivational (Schroeder & Dorn, 2016; van der Meij & Bӧckmann, 2021; Yang & Xie, 2021). Finally, future research could integrate QEVs into live modules that students take; assessments could then determine whether QEVs promote long-term knowledge retention. Integrating QEVs into classwork could help determine whether the benefit of QEVs is due to a direct effect of testing (e.g. enhancing retention of specific topics) or an indirect impact of testing (e.g. increasing engagement with online resources). Similar to Haagsman et al. (2020), our current results suggest that QEVs may have an indirect impact on student performance, as students who voluntarily took notes achieved higher scores on the questions.

Together, our study supports using QEVs in teaching, particularly with narrative videos. Although more research is required, placing questions during videos for younger students (e.g. 25–34 year-olds) and after videos for older students (e.g. 65–74 year-olds) may be beneficial. Importantly, we re-used videos that had already been produced and inserted questions into them using commercially available software; teacher workload therefore need not increase drastically to create QEVs. Thus, QEVs may be effective and engaging learning resources across ages, and instructors should monitor their use to ensure that individual needs are addressed and learning objectives are achieved.

Data availability

Anonymised data are available upon request by contacting the corresponding author. Surveys used can be found in the Supplemental Materials.

Ethics

The experiments and questions were approved by the Open University Human Research Ethics Committee (HREC/3239/Singer) and by the Curriculum Design Study Panel (CDSP).


Acknowledgments

We would like to thank eSTEeM, The Open University centre for pedagogy in the Faculty of Science, Technology, Engineering and Mathematics, for their support of this project. We would also like to thank Sue Germer from PlayPosit, who allowed us to engage with PlayPosit for this experiment and provided guidance.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/02601370.2023.2196449

Additional information

Funding

This work was supported by eSTEeM at The Open University.

Notes on contributors

Morgan B. Zolkwer

Morgan B. Zolkwer graduated from the University of Sussex in June 2022 (BSc Hons Psychology). He worked as a placement student and completed his undergraduate dissertation research with Dr Bryan Singer. Morgan starts his Psychology PhD at University College London in October 2022.

Rafael Hidalgo

Rafael Hidalgo holds an MSc in Electronic Engineering and a Media MBA. He joined the Open University (OU) in 2004 as a Media Project Manager and is currently a Senior Learning Designer. Before joining the OU, Rafael had a 15-year career in the broadcasting industry in technical and senior management roles. Rafael is currently researching the use of learning analytics as a tool to inform and improve the design of OU modules. He is also working on the use of interactive video in education, particularly in distance learning environments.

Bryan F. Singer

Dr Bryan F. Singer (PhD, FHEA) is a Lecturer in the School of Psychology at The University of Sussex. He is also the Director of the Sussex Addiction Research & Intervention Centre and the Chair of the Cross School Research Ethics Committee at The University of Sussex. Dr Singer is also an Associate Lecturer at The Open University. Dr Singer’s teaching focuses on the biological psychology of mental health, as well as the science of memory. His research investigates the neural underpinnings of learning, motivation, and addictions. Dr Singer received his PhD from The University of Chicago and a postgraduate certificate in higher education from The University of Sussex.

References

  • Alghamdi, A., Karpinski, A. C., Lepp, A., & Barkley, J. (2020). Online and face-to-face classroom multitasking and academic performance: Moderated mediation with self-efficacy for self-regulated learning and gender. Computers in Human Behavior, 102, 214–222. https://doi.org/10.1016/j.chb.2019.08.018
  • Arghode, V., Brieger, E. W., & McLean, G. N. (2017). Adult learning theories: Implications for online instruction. European Journal of Training and Development, 41(7), 593–609. https://doi.org/10.1108/EJTD-02-2017-0014
  • Arguel, A., & Jamet, E. (2009). Using video and static pictures to improve learning of procedural contents. Computers in Human Behavior, 25(2), 354–359. https://doi.org/10.1016/j.chb.2008.12.014
  • Chen, Y. T., Liou, S., & Chen, L. F. (2019). The relationships among gender, cognitive styles, learning strategies, and learning performance in the flipped classroom. International Journal of Human–Computer Interaction, 35(4–5), 395–403. https://doi.org/10.1080/10447318.2018.1543082
  • Chouhan, R. (2021). Effective interactive video assignments and rewatch analytics for online flipped classrooms. 2021 IEEE 1st International Conference on Advanced Learning Technologies on Education & Research (ICALTER), 1–4. https://doi.org/10.1109/ICALTER54105.2021.9675132
  • Costley, J., Fanguy, M., Lange, C., & Baldwin, M. (2020). The effects of video lecture viewing strategies on cognitive load. Journal of Computing in Higher Education, 33(1). https://doi.org/10.1007/s12528-020-09254-y
  • Cummins, S., Beresford, A. R., & Rice, A. (2016). Investigating engagement with in-video quiz questions in a programming course. IEEE Transactions on Learning Technologies, 9(1), 57–66. https://doi.org/10.1109/TLT.2015.2444374
  • Davidson, D. J., Zacks, R. T., & Williams, C. C. (2003). Stroop interference, practice, and aging. Neuropsychology, Development, and Cognition Section B, Aging, Neuropsychology and Cognition, 10(2), 85–98. https://doi.org/10.1076/anec.10.2.85.14463
  • Dedeilia, A., Sotiropoulos, M. G., Hanrahan, J. G., Janga, D., Dedeilias, P., & Sideris, M. (2020). Medical and surgical education challenges and innovations in the COVID-19 era: A systematic review. In Vivo, 34(3 Suppl), 1603–1611. https://doi.org/10.21873/invivo.11950
  • Guerreiro, M. J. S., Murphy, D. R., & Van Gerven, P. W. M. (2010). The role of sensory modality in age-related distraction: A critical review and a renewed view. Psychological Bulletin, 136(6), 975–1022. https://doi.org/10.1037/a0020731
  • Haagsman, M. E., Scager, K., Boonstra, J., & Koster, M. C. (2020). Pop-up questions within educational videos: Effects on students’ learning. Journal of Science Education and Technology, 29(6), 713–724. https://doi.org/10.1007/s10956-020-09847-3
  • Heaggans, R. C. (2012). The 60’s are the new 20’s: Teaching older adults technology. SRATE Journal, 21(2), 1–8. http://files.eric.ed.gov/fulltext/EJ990630.pdf
  • Hess, T. M., & Ennis, G. E. (2012). Age differences in the effort and costs associated with cognitive activity. The Journals of Gerontology Series B, Psychological Sciences and Social Sciences, 67(4), 447–455. https://doi.org/10.1093/geronb/gbr129
  • Ibrahim, M. (2012). Implications of designing instructional video using cognitive theory of multimedia learning. https://www.semanticscholar.org/paper/23f31de553b78ed2628567814e4bc32bdb85882d
  • Iwanaga, J., Loukas, M., Dumont, A. S., & Tubbs, R. S. (2021). A review of anatomy education during and after the COVID-19 pandemic: Revisiting traditional and modern methods to achieve future innovation. Clinical Anatomy, 34(1), 108–114. https://doi.org/10.1002/ca.23655
  • Jacoby, L. L., Wahlheim, C. N., & Coane, J. H. (2010). Test-enhanced learning of natural concepts: Effects on recognition memory, classification, and metacognition. Journal of Experimental Psychology Learning, Memory, and Cognition, 36(6), 1441–1451. https://doi.org/10.1037/a0020636
  • Kestin, G., & Miller, K. (2022). Harnessing active engagement in educational videos: Enhanced visuals and embedded questions. Physical Review Physics Education Research, 18(1), 010148. https://doi.org/10.1103/PhysRevPhysEducRes.18.010148
  • Khalil, M. K., & Elkhider, I. A. (2016). Applying learning theories and instructional design models for effective instruction. Advances in Physiology Education, 40(2), 147–156. https://doi.org/10.1152/advan.00138.2015
  • Laal, M., & Salamati, P. (2012). Lifelong learning; Why do we need it? Procedia - Social and Behavioral Sciences, 31, 399–403. https://doi.org/10.1016/j.sbspro.2011.12.073
  • Latanich, G., Nonis, S. A., & Hudson, G. I. (2001). A profile of today’s distance learners: An investigation of demographic and individual difference variables of distance and non-distance learners. Journal of Marketing for Higher Education, 11(3), 1–16. https://doi.org/10.1300/J050v11n03_01
  • Lavigne, E., & Risko, E. F. (2018). Optimizing the use of interpolated tests: The influence of interpolated test lag. Scholarship of Teaching and Learning in Psychology, 4(4), 211–221. https://doi.org/10.1037/stl0000118
  • Lawson, T. J., Bodle, J. H., Houlette, M. A., & Haubner, R. R. (2006). Guiding questions enhance student learning from educational videos. Teaching of Psychology, 33(1), 31–33. https://doi.org/10.1207/s15328023top3301_7
  • Lo, C. K., & Hew, K. F. (2017). A critical review of flipped classroom challenges in K-12 education: Possible solutions and recommendations for future research. Research and Practice in Technology Enhanced Learning, 12(1), 4. https://doi.org/10.1186/s41039-016-0044-2
  • Martin, F., & Bolliger, D. U. (2018). Engagement matters: Student perceptions on the importance of engagement strategies in the online learning environment. Online Learning, 22(1), 205–222. https://doi.org/10.24059/olj.v22i1.1092
  • Mayer, R. E., & Chandler, P. (2001). When learning is just a click away: Does simple user interaction foster deeper understanding of multimedia messages? Journal of Educational Psychology, 93(2), 390–397. https://doi.org/10.1037//0022-0663.93.2.390
  • Mayer, R. E., Dow, G. T., & Mayer, S. (2003). Multimedia learning in an interactive self-explaining environment: What works in the design of agent-based microworlds? Journal of Educational Psychology, 95(4), 806–812. https://doi.org/10.1037/0022-0663.95.4.806
  • Narushima, M. (2008). More than nickels and dimes: The health benefits of a community‐based lifelong learning programme for older adults. International Journal of Lifelong Education, 27(6), 673–692. https://doi.org/10.1080/02601370802408332
  • Nistor, N. (2013). Stability of attitudes and participation in online university courses: Gender and location effects. Computers & Education, 68, 284–292. https://doi.org/10.1016/j.compedu.2013.05.016
  • Noetel, M., Griffith, S., Delaney, O., Sanders, T., Parker, P., Del Pozo Cruz, B., & Lonsdale, C. (2021). Video improves learning in higher education: A systematic review. Review of Educational Research, 91(2), 204–236. https://doi.org/10.3102/0034654321990713
  • O’flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education, 25, 85–95. https://doi.org/10.1016/j.iheduc.2015.02.002
  • Osam, E. K., Bergman, M., & Cumberland, D. M. (2017). An integrative literature review on the barriers impacting adult learners’ return to college. Adult Learning, 28(2), 54–60. https://doi.org/10.1177/1045159516658013
  • Paechter, C., Toft, A., & Carlile, A. (2021). Non-binary young people and schools: Pedagogical insights from a small-scale interview study. Pedagogy, Culture & Society, 29(5), 695–713. https://doi.org/10.1080/14681366.2021.1912160
  • Phillips, J. A., Schumacher, C., & Arif, S. (2016). Time spent, workload, and student and faculty perceptions in a blended learning environment. American Journal of Pharmaceutical Education, 80(6), 102. https://doi.org/10.5688/ajpe806102
  • Pokhrel, S., & Chhetri, R. (2021). A literature review on impact of COVID-19 pandemic on teaching and learning. Higher Education for the Future, 8(1), 133–141. https://doi.org/10.1177/2347631120983481
  • Price, L. (2006). Gender differences and similarities in online courses: Challenging stereotypical views of women. Journal of Computer Assisted Learning, 22(5), 349–359. https://doi.org/10.1111/j.1365-2729.2006.00181.x
  • Renner, B. J., & Skursha, E. (2022). Support for adult students to overcome barriers and improve persistence. Journal of Continuing Higher Education, 1–10. https://doi.org/10.1080/07377363.2022.2065435
  • Richardson, J. T. E., & Woodley, A. (2003). Another look at the role of age, gender and subject as predictors of academic attainment in higher education. Studies in Higher Education, 28(4), 475–493. https://doi.org/10.1080/0307507032000122305
  • Schroeder, L. B., & Dorn, B. (2016). Enabling and integrating online formative assessment in a flipped calculus course. PRIMUS, 26(6), 585–602. https://doi.org/10.1080/10511970.2015.1050619
  • Simonds, T. A., & Brock, B. L. (2014). Relationship between age, experience, and student preference for types of learning activities in online courses. Journal of Educators Online, 11(1), 1. http://files.eric.ed.gov/fulltext/EJ1020106.pdf
  • Stoet, G., O’connor, D. B., Conner, M., & Laws, K. R. (2013). Are women better than men at multi-tasking? BMC Psychology, 1(1), 1–10. https://doi.org/10.1186/2050-7283-1-18
  • Szpunar, K. K., Jing, H. G., & Schacter, D. L. (2014). Overcoming overconfidence in learning from video-recorded lectures: Implications of interpolated testing for online education. Journal of Applied Research in Memory and Cognition, 3(3), 161–164. https://doi.org/10.1016/j.jarmac.2014.02.001
  • Szpunar, K. K., Khan, N. Y., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences of the United States of America, 110(16), 6313–6317. https://doi.org/10.1073/pnas.1221764110
  • Torres, D., Pulukuri, S., & Abrams, B. (2022). Embedded questions and targeted feedback transform passive educational videos into effective active learning tools. Journal of Chemical Education, 99(7), 2738–2742. https://doi.org/10.1021/acs.jchemed.2c00342
  • van der Meij, H., & Bӧckmann, L. (2021). Effects of embedded questions in recorded lectures. Journal of Computing in Higher Education, 33(1), 235–254. https://doi.org/10.1007/s12528-020-09263-x
  • Vural, Ö. F. (2013). The impact of a question-embedded video-based learning tool on E-learning. Educational Sciences: Theory & Practice, 13(October 2012), 4–6.
  • Weinstein, L. B. (2004). Lifelong learning benefits older adults. Activities, Adaptation & Aging, 28(4), 1–12. https://doi.org/10.1300/J016v28n04_01
  • Weurlander, M., Söderberg, M., Scheja, M., Hult, H., & Wernerson, A. (2012). Exploring formative assessment as a tool for learning: Students’ experiences of different methods of formative assessment. Assessment & Evaluation in Higher Education, 37(6), 747–760. https://doi.org/10.1080/02602938.2011.572153
  • World Health Organization. (2021, June 14). Considerations for implementing and adjusting public health and social measures in the context of COVID-19: interim guidance. apps.who.int. https://apps.who.int/iris/bitstream/handle/10665/341811/WHO-2019-nCoV-Adjusting-PH-measures-2021.1-chi.pdf
  • Yang, Z., & Xie, P. (2021). Students’ achievement motivation moderates the effects of interpolated pre-questions on attention and learning from video lectures. Learning and Individual Differences. https://www.sciencedirect.com/science/article/pii/S1041608021000923
  • Yorke, M. (2003). Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice. Higher Education, 45(4), 477–501. https://doi.org/10.1023/a:1023967026413
  • Yu, Z. (2021). The effects of gender, educational level, and personality on online learning outcomes during the COVID-19 pandemic. International Journal of Educational Technology in Higher Education, 18(1), 14. https://doi.org/10.1186/s41239-021-00252-3