
A qualitative synthesis of video feedback in higher education

Pages 157-179 | Received 10 May 2017, Accepted 26 Apr 2018, Published online: 19 May 2018

ABSTRACT

While written and audio feedback have been well-examined by researchers, video feedback has received less attention. This review establishes the current state of research into video feedback, encompassing three formats: talking head, screencast and combination screencast. Existing research shows that video feedback has a high level of acceptability amongst both staff and students and may help strengthen student-marker relationships; however, the impact of video feedback on student learning outcomes is yet to be determined. In addition, current evidence is drawn largely from small-scale studies and self-reported data susceptible to the novelty effect. While video feedback appears to be a promising alternative to traditional written feedback for its relative relational richness, the medium continues to be primarily used for information transmission rather than dialogue. Further research is needed to establish how the medium of video influences the feedback process, its potential to facilitate dialogue and its effects on student learning.

Introduction

The term ‘feedback’ is commonly used within higher education to refer to the provision of comments on student work and is traditionally intended to provide a corrective function (Boud and Molloy 2013). In contrast, Boud and Molloy (2013) contend that feedback’s fundamental purpose should not be to simply provide comments on student work, but to have a positive impact on what students can do. Effective feedback should enable students to actively participate in the process in order to understand the intended goals, self-evaluate their own work in relation to these goals and to develop strategies to reach the goals or to set more challenging ones (Hattie and Timperley 2007). Therefore, feedback is more than information; it is a process that involves the student and is forward-looking and action-oriented. However, this educative potential is often underutilised by both students and markers. Research consistently demonstrates high levels of student dissatisfaction with assessment feedback, with students reporting feedback comments to be inconsistent, unhelpful, infrequent, and badly timed (Hounsell 2007; Nicol 2010). Markers report that their feedback comments may be ignored and, in some cases, not accessed at all (Hounsell 2007; Nicol 2010), particularly when provided at the end of a unit of study when there is no immediate opportunity to use them (Zimbardi et al. 2017).

Alternative conceptualisations to information transmission have emerged as researchers attempt to redress this gap between the potential and actual impacts of feedback. There now exists a rich body of work that adopts sociocultural perspectives on feedback, whereby students’ emerging capacity for meaning-making is influenced by context, interaction and relationships within a learning trajectory (Ajjawi and Boud 2017; Esterhazy and Damşa 2017; Telio, Regehr, and Ajjawi 2016). In other words, feedback is seen as a ‘communicative act and a social process in which power, emotion and discourse impact on how messages are constructed, interpreted and acted upon’ (Ajjawi and Boud 2018, 3). Through creating conditions in which feedback dialogue can emerge, purposes of feedback other than correction can be fostered; for instance, promoting self-regulation of learning, inducting students into disciplinary understandings of criteria and standards, and helping them to make judgements about the quality of their work (Ajjawi and Boud 2018; Boud and Molloy 2013; Esterhazy and Damşa 2017). Such purposes fit within a sustainable assessment agenda, enabling students to better address the current task and to meet their own future learning needs (Boud and Soler 2016).

It strikes us that video feedback holds potential due to its affordances over written or audio feedback in promoting a social interactional approach. Borup, Graham, and Velasquez (2011) assert that complex and difficult communications are best suited to media rich in verbal and nonverbal cues. While written feedback (for instance, see Dowden et al. 2013; Jolly and Boud 2013; Vardi 2013) and audio feedback (for instance, see Gould and Day 2013; Lunt and Curran 2010; Voelkel and Mello 2014) have been well-examined by researchers, video feedback has received less attention. This paper seeks to review the video feedback research literature in light of current sociocultural conceptions of feedback. Specifically, we explore findings around video feedback and the ways in which such technologies influence how markers in higher education conceptualise and practise feedback, and its intended effects on students.

Video feedback definitions

Few authors have provided a definition of video feedback. This lack of an agreed understanding has led to the term being applied to a number of feedback formats which include a moving image, and even to Computer Assisted Learning applications with video cues (Henderson and Phillips 2014). The term video feedback has been used by a number of authors to describe screencast feedback (see Mathisen 2012; Thompson and Lee 2012; Turner and West 2013). Screencast feedback typically comprises a recording of a marker’s computer screen or designated window (known as a ‘screen capture’ or ‘screencast’) which captures mouse movements, scrolling and typing, along with a simultaneous audio narration (Henderson and Phillips 2014; Thompson and Lee 2012). Screencast feedback offers students feedback in the form of a moving image, thereby setting it apart from purely audio feedback, but may not include a physical image of the marker as they speak. Such screencasts therefore lack the range of nonverbal cues (for instance, facial expressions and body language) that Borup et al. (2014) identify as constituting the richness of video feedback. Conversely, talking head video feedback occurs when a marker records themselves ‘speaking to the camera about the student’s assessment and then makes the video available to the student’ (Lamey 2015, 692). A middle ground between these two video formats is identified by Klappa (2015) and Phillips (in Ross 2015), who note that ‘talking head’ or ‘combination’ screencasts enable a small video recording of the marker to be displayed within a screencast, thus providing a physical representation of the marker in concert with the screen capture and audio narration of a traditional screencast. As such, there are three formats of video feedback available to markers in higher education: screencast, combination screencast, and talking head.

We identified one article in which Henderson and Phillips (2015) reviewed the literature on marker-created assessment feedback artefacts (for instance, audio, video and text) and identified guidelines for the creation and structuring of video-based feedback content. We build on this review through a systematic search focused on video feedback, in what is a rapidly-advancing field. We specifically examine the three formats of video feedback identified above, synthesising the evidence and identifying strengths and challenges. We also consider conceptualisations of, and purposes for, utilising video feedback in light of developing sociocultural perspectives, which position feedback as social and situated acts of meaning-making. We then highlight limitations of the existing research, adopting a critical gaze, and make recommendations on future directions for investigation.

Methods

This qualitative synthesis of the literature considers screencast, talking head, and combination screencast feedback, and is drawn primarily from peer-reviewed journals. Following Bearman and Dawson’s definition, qualitative synthesis is ‘any methodology whereby study findings are systematically interpreted through a series of expert judgements to represent the meaning of the collected work’ (2013, 253). This ‘big picture’ approach enabled us to synthesise the diverse perspectives and practices of video feedback found within the literature.

Search strategy

Searches were made using keywords: ‘video feedback’, ‘video marking’, ‘video grading’, ‘screencast feedback’ and ‘asynchronous video feedback’. As video feedback is commonly used across a range of professions – for instance, as a training tool for sportspersons – searches were also made using the keywords ‘video feedback for assessment’ and ‘video comments for assessment’ to narrow the scope of returned results. The following inclusion criteria were applied to ensure the resources met the scope of the review:

  • Published after 2005 – due to the speed of technological advancement over the past decade, evidence from before this date typically relies on outmoded and even obsolete video recording methods (Henderson and Phillips 2015);

  • Published in English;

  • Full text available;

  • Peer-reviewed journals and grey literature (e.g. project reports, guides); and

  • Studies addressing tutor-student(s) video feedback in a higher education assessment context (universities, colleges, and professional schools).

We excluded papers using video feedback to promote psychomotor learning. A number of databases were searched, including the EBSCOhost online platform, Taylor and Francis Online, JSTOR, and the UK Higher Education Academy website. The snowball method was also used to extract potential resources from the reference lists of journal articles and grey literature found via database searches. Searching was concluded in December 2017.

Once database searches and snowballing were completed, the researchers met to develop an initial framework to guide their readings of the literature. The framework took the form of an extraction table; this format was chosen to facilitate the later processes of data extraction, analysis and synthesis. Categories in the extraction table were aligned with the overall research aims and phrased as questions, centring on issues such as definitions of video feedback, evidence of effectiveness, models and recommendations, and challenges.

The researchers then each read the same three papers and met to reassess the extraction table. As a result of the researchers’ preliminary reading, two new categories were added: one addressing professional development and training for markers, and the other resources and technical requirements. The researchers also discussed the definition of video feedback to use when reviewing the collated resources, and at an early stage it was determined that all formats of feedback comprising a moving image would be considered ‘video feedback’. This scope was deliberately broad, to allow a consensus on what constitutes video feedback to emerge from the literature.

One researcher (the first author) then identified the relevance of each resource, in relation to the research questions and the preliminary definition of video feedback, by reading the titles and abstracts. Resources which were deemed to fall outside any of the identified selection criteria were excluded from further reading and analysis.

Extraction

Using the extraction table as a guide, the included resources were critically read by the first author. Data was extracted and allocated to the appropriate category in the extraction table, and this process was repeated for each resource until all resources had been reviewed.

Analysis and synthesis

Once data extraction was completed, the research team convened in a series of meetings to discuss the data within the extraction table and identify themes. When considering the data, the researchers adopted Selwyn’s pessimistic stance on educational technology research, seeking to examine not only ‘how technology could and should be used … [but also] how technology is actually being used in practice’ (2011, 715; original emphasis). Selwyn’s pessimism is particularly apposite when considering research into video feedback, which often exemplifies the ‘inherent positivity’ that Selwyn argues prevails throughout educational technology research (713).

Findings

We identified 37 resources, of which 33 are papers from peer-reviewed journals. In total, 12 papers discussing talking head feedback were identified, along with 21 instances of screencast feedback. Two papers combined screencast feedback with written feedback. In most cases, the authors referred to their feedback format primarily as ‘video feedback’. Owing to the relatively small number of peer-reviewed articles discussing video feedback, three instances of grey literature were included (two project reports and a teaching resource based on personal experience), along with a newspaper interview with a video feedback researcher. Of these, three discuss talking head feedback, and one considers all three video feedback formats.

The reviewed studies utilised a broad range of research methods. Mixed methods was the most frequently used approach, adopted in eleven studies. Seven studies also drew on their authors’ personal experiences, most often in the form of personal reflections, which typically overlapped with mixed methods and action research methodologies. Two studies considered actual feedback content (Moore and Filling 2012; Thomas, West, and Borup 2017) and another the effectiveness of feedback on students’ writing (Grigoryan 2017b). The findings are presented under the following headings: characteristics of video feedback; marker perceptions (advantages and challenges); student perceptions (advantages and challenges); and impact on student learning. (For a summary of the included literature, including video feedback format, assessment type, research design and sample size of each study, please refer to Appendix A.)

Characteristics of video feedback

Studies seeking to determine the impact of the video format on feedback have typically concluded that video feedback differs from written feedback in a number of ways. The clearest consensus amongst researchers is that, like audio feedback, video feedback provides students with more feedback and greater detail (Borup, West, and Thomas 2015; Crook et al. 2012; Elola and Oskoz 2016; Henderson and Phillips 2015; Lamey 2015; Mathisen 2012; Mayhew 2017; Orlando 2016; Parton, Crain-Dorough, and Hancock 2010; Vincelette and Bostic 2013). Episodes of video feedback were found to contain almost double the number of words of written feedback, and sometimes more (Anson et al. 2016; Mayhew 2017; Thomas, West, and Borup 2017). Moore and Filling (2012) found that, when providing video feedback, markers made fewer brief suggestions and corrective comments, instead elaborating on points and providing specific details. Markers using video feedback were also more likely to provide more detailed comments on the positive aspects of students’ work (Lamey 2015; Parton, Crain-Dorough, and Hancock 2010; Thomas, West, and Borup 2017).

Some researchers suggest that video feedback fundamentally shifts the focus of feedback from the surface-level mechanics of writing to more substantive, global aspects of students’ performance (Henderson and Phillips 2015; Lamey 2015; Orlando 2016; Thompson and Lee 2012; Vincelette and Bostic 2013). Under this characterisation, surface-level feedback focuses on superficial or mechanical concerns such as spelling and grammar, syntax and referencing, while substantive feedback addresses deeper and more conceptual academic skills such as argument, analysis and synthesis. For instance, Lamey (2015) found that he provided increased levels of substantive feedback on intellectual arguments when using talking head video feedback, and limited comments on mechanical areas such as syntax to one or two indicative examples. Thompson and Lee (2012) found a similar shift amongst their students, noting that students made fewer surface-level edits than substantive changes after receiving screencast feedback. However, an analysis of markers’ feedback videos by Moore and Filling (2012) found that talking head feedback provides students with similar types of feedback to written feedback, such as suggestions, corrections, and elaboration using examples. A consistent finding is that feedback remained focused on the task and did not seek to promote students’ abilities to self-regulate or to develop better evaluative judgement.

Researchers also report that video feedback is more conversational in nature. Students who received combined screencast and text feedback perceived a closer relationship with their marker than those who received only text feedback, describing it as like meeting with the marker in person (Grigoryan 2017a). Anson et al. (2016) found that students in their study repeatedly described screencast feedback as conversational and reminiscent of a face-to-face meeting. It is interesting that markers and students refer to video feedback as being more conversational, even though no actual conversation or dialogue takes place via the video medium. A possible explanation is that video reduces the perceived distance between marker and student, leading to the increased use of phatics (speech that serves a social function rather than conveying information), salutations and compliments (Thomas, West, and Borup 2017). Anson et al. (2016) speculate that screencast feedback encourages students to view their tutors as ‘coaches’ rather than ‘judges’. In support of this, Borup, West, and Thomas (2015) identified that video contained more relationship-building comments than text, including more frequent use of student names. Two studies found that screencasts’ conversational tone had encouraged face-to-face interaction and improved marker-student relationships (Anson et al. 2016; Vincelette and Bostic 2013). While there were favourable links between screencast feedback and face-to-face meetings, the authors acknowledged that screencasts cannot replicate the dialogic elements of face-to-face feedback.

Hattie and Timperley (2007) align feedforward with the question ‘Where to next?’, and argue that feedforward moves beyond diagnosing current performance to offer information on how to progress. Lamey (2015) found that video feedback prompted him to offer more constructive comments and suggestions for future assignments, rather than simply itemising errors. Henderson and Phillips (2015) speculate that feedforward is not more likely to occur in video feedback as a result of the medium itself; rather, the format provides markers with more time to offer suggestions for future work. However, the research designs of two studies explicitly aimed to increase marker awareness and provision of feedforward comments (Crook et al. 2010; Henderson and Phillips 2015), while a further study noted that feedforward comments form part of the institution’s distance learning protocol (Edwards, Dujardin, and Williams 2012). It is therefore possible that some effects of video feedback are due to the characteristics of feedback protocols and design at a study or institutional level, rather than the video medium itself.

Marker perceptions of video feedback

Advantages

Markers who have trialled video feedback are in general positive about the format. A significant advantage of video feedback for many markers is its potential to reduce marking times. Although markers are often sceptical that video feedback can improve their marking efficiency, fearing that the process will be complex and time-consuming, research suggests that the time required to produce video feedback is either less than or comparable to that required for written feedback (Crook et al. 2012; Elola and Oskoz 2016; Henderson and Phillips 2015; Jones, Georghiades, and Gunson 2012; Lamey 2015; Moore and Filling 2012; West and Turner 2016). For instance, Henderson and Phillips (2015) found that providing video feedback took roughly half the time needed to produce written feedback, although Lamey (2015) found that to do so requires ‘conscious effort’ (694) by the marker to avoid multiple takes and detailed notetaking. However, some studies of screencast feedback have concluded the format is unsuited to large cohorts due to lengthy production times (Marriott and Teoh 2012; Mathisen 2012).

Far from the ‘characteristic dread or sufferance’ reported by many markers providing written feedback (Henderson and Phillips 2015, 63), many studies have found that markers enjoy providing video feedback and that it may prompt a renewed enthusiasm for providing feedback (Henderson and Phillips 2015; Lamey 2015; Parton, Crain-Dorough, and Hancock 2010). Henderson and Phillips (2015), reflecting on their own experiences providing video feedback, felt that the video format allowed them to discuss arguments and issues ordinarily too complex to address through written feedback, and concluded that video feedback ‘no longer felt like an exercise in defending a grade … but rather providing valuable advice’ (63). Markers in a number of studies stated that they would consider using video feedback in future (Crook et al. 2012; Harper, Green, and Fernandez-Toro 2012; Orlando 2016; Parton, Crain-Dorough, and Hancock 2010; West and Turner 2016), with six of eight markers surveyed by Crook et al. (2012) reporting that using video feedback had ‘instigated positive changes in the ways in which [they] thought about and developed feedback’ (395).

Other advantages of video feedback noted by markers lie in the affordances offered by the video format. In particular, the highly personalised nature of video feedback allows markers to more overtly address students as individuals, transforming feedback into a communication which can help students feel recognised and valued rather than simply a name on a list (Borup et al. 2014; Harper, Green, and Fernandez-Toro 2012; Klappa 2015; Vincelette and Bostic 2013). The richness of video feedback is felt to enhance the impact of personalising strategies, such that students could more easily recognise the authenticity of emotional responses through a visual medium (Borup et al. 2014). Similarly, markers from several studies reported that recording screencast feedback had encouraged them to speak more informally, and felt as if they were speaking to the student in person (Harper, Green, and Fernandez-Toro 2012; Orlando 2016; Vincelette and Bostic 2013). A further advantage highlighted by markers is that video feedback can be saved and replayed by students as required, unlike a student-marker meeting (Crook et al. 2012; Klappa 2015; Séror 2012) – unless the meeting is also recorded.

Challenges and concerns

Markers have reported some frustrations with the recording process, such as the need for a quiet location and the inability to pause while recording or to edit video feedback once recorded (Borup et al. 2014; Parton, Crain-Dorough, and Hancock 2010; Vincelette and Bostic 2013). For instance, feedback may need to be rerecorded due to interruptions during the recording process or marker dissatisfaction with feedback quality. However, markers in Borup et al.’s (2014) study grew less likely to rerecord videos, becoming more comfortable retaining small errors and considering mistakes to make their feedback seem ‘more human’ (241). In a more recent study, Borup, West, and Thomas (2015) reported that five of nine participating markers experienced technical problems and seven of nine considered text feedback to be more efficient and convenient. This may be indicative of the challenges of scaling up, where markers beyond the format’s champions become involved in the feedback process. A number of authors also highlight that data protection and maintaining student confidentiality are important considerations when distributing video feedback to individual students (Klappa 2015; Lamey 2015; Vincelette and Bostic 2013).

While the format of video feedback provides a unique opportunity to transmit emotional expression, it is also a potential pitfall. Students from a number of studies have commented that hearing their marker’s tone of voice or emphasis assisted their understanding and positive interpretation of their feedback, but a marker’s tone may also transmit unwelcome emotions (Henderson and Phillips 2015; Lamey 2015; Moore and Filling 2012). Markers sometimes had difficulty concealing emotions that they did not wish to convey to students, such as disappointment or frustration, or found their praise coming across as formulaic (Borup et al. 2014). Conversely, Thomas, West, and Borup (2017) found that written feedback comments contained more expression of emotions than video, which may have been due to difficulties in assigning codes to facial expressions or vocal tones. Regardless, none of the studies enabled students to convey their emotional responses back to the markers who had provided the feedback.

Student perceptions of video feedback

Advantages

Most studies show students respond positively to video feedback, with the format possessing a high degree of acceptability. Students typically like video feedback, consider it to be beneficial to their learning, and report a preference for video feedback over written feedback (Borup et al. 2014; Chiang 2009; Crook et al. 2012; Grigoryan 2017a; Henderson and Phillips 2015; Jones, Georghiades, and Gunson 2012; Lamey 2015; Mayhew 2017; Moore and Filling 2012; Parton, Crain-Dorough, and Hancock 2010; Vincelette and Bostic 2013).

Studies show that most students value improved clarity and ease of understanding as the key strength of video feedback. Students consider that the video format affords a clearer understanding of marker comments and helps avoid misinterpretations, with the visual and aural cues communicated in video significantly improving clarity and detail and reducing the ambiguity of feedback information (Anson et al. 2016; Borup et al. 2014; Borup, West, and Thomas 2015; Crook et al. 2012; Edwards, Dujardin, and Williams 2012; Elola and Oskoz 2016; Harper, Green, and Fernandez-Toro 2012; Henderson and Phillips 2015; Mayhew 2017; McCarthy 2015; Moore and Filling 2012; Orlando 2016). Students also consider video feedback to be more extensive, elaborate and informative than written and oral feedback (Anson et al. 2016; Borup, West, and Thomas 2015; Crook et al. 2012; Lamey 2015; Mayhew 2017; Moore and Filling 2012; Vincelette and Bostic 2013). Students surveyed by Crook et al. (2012) felt that key points within feedback were better emphasised through the video format, and that video feedback helped them to visualise a task or process through the incorporation of demonstrations and diagrams. Students also value being able to hear their marker’s tone of voice, reporting that this assists their understanding of their marker’s expectations, helps them to prioritise revision, better emphasises positive achievements, and enhances motivation (Anson et al. 2016; Harper, Green, and Fernandez-Toro 2012).

A number of studies have found that positive student perceptions of video feedback are further reflected by higher rates of student engagement with video feedback than with its written counterpart. West and Turner (2016) found that 55 per cent of 142 students reported that they had spent more time reviewing their individualised screencast feedback than they would ordinarily spend on written feedback, a finding echoed by Mayhew (2017). Several studies also found that students viewed video feedback multiple times, often revising their assignment or taking notes while watching (Crook et al. 2012; Grigoryan 2017a; Harper, Green, and Fernandez-Toro 2012; Moore and Filling 2012; Parton, Crain-Dorough, and Hancock 2010; Vincelette and Bostic 2013). Students appreciated the ability to pause, repeat and revisit video feedback, and considered this flexibility to be a significant advantage of the format (Anson et al. 2016; Crook et al. 2012; Henderson and Phillips 2015; Jones, Georghiades, and Gunson 2012; Lamey 2015; Mathisen 2012). Henderson and Phillips (2015) found that a number of students felt video feedback had led them to reflect on and critically evaluate their work, while nearly 60 per cent of 105 students in Crook et al.’s (2012) study reported discussing video feedback with other students, although which aspects of the feedback were discussed, and why, was not clear.

Improved rates of student engagement with video feedback have been linked to student perceptions that the format is more personalised and provides students with a stronger connection to their markers. As Henderson and Phillips (2015) have shown, students consider video feedback to offer more personalised and individualised feedback, where they felt recognised and valued as individuals, with similar perceptions identified in a number of other studies (Anson et al. 2016; Borup et al. 2014; Crook et al. 2012; Jones, Georghiades, and Gunson 2012; Klappa 2015; Lamey 2015; Mathisen 2012; Orlando 2016). Such findings are consistent with markers’ perceptions that the video format enhances personalisation (Borup et al. 2014; Harper, Green, and Fernandez-Toro 2012; Klappa 2015). Studies have also found that students consider video feedback to be less easily genericised than written feedback (for instance, drawn from comment banks) and therefore more personalised (Borup et al. 2014; Henderson and Phillips 2015), helping to foster a personal connection with their markers by simulating a face-to-face meeting (Anson et al. 2016; Crook et al. 2012; Elola and Oskoz 2016; Grigoryan 2017a; Henderson and Phillips 2015; Klappa 2015; Moore and Filling 2012). Students surveyed by Henderson and Phillips (2015) commented that video feedback created the sense that their marker was speaking directly to them. Interestingly, even video feedback provided via a single video to a whole cohort may be more highly valued than written feedback to individual students, with research showing that students are more likely to engage with generic video feedback than with written and audio feedback (Cann 2007; Crook et al. 2012). However, although students may find video feedback more engaging and perceive it as conversational and personalised, this may not translate into action or changed behaviours – a finding articulated by students surveyed by Thomas, West, and Borup (2017), who reported that they were more likely to respond to written feedback. Based on our findings, it remains unclear what may motivate students to engage more with one medium over another.

Challenges and concerns

Several studies have identified that many students prefer written to video feedback. Students in Borup, West, and Thomas’s (2015) study overwhelmingly preferred text to video feedback (64.3% to 14.3% respectively) and, similarly, Orlando (2016) found that nearly two thirds of students preferred text over screencast feedback. Students felt text was easier to access, more efficient to skim through, and more concise (Borup, West, and Thomas 2015). Students who preferred text to screencast feedback also felt it was easier and quicker to refer to at a later date (Edwards, Dujardin, and Williams 2012; Orlando 2016). However, Orlando (2016) speculates that mature-aged students may be less comfortable with video feedback than younger students, who are more likely to be so-called ‘digital natives’. This might be considered an example of the techno-romanticism (Selwyn 2014) driving research in the educational technology field. Critical reviews have previously rejected the ‘digital native’ narrative because it advances simplistic dichotomies in what is a complex and nuanced field (Bennett and Maton 2010).

A number of studies have found that students may experience negative emotional reactions when watching video feedback, from relatively minor feelings of awkwardness to anxiety and helplessness (Borup et al. 2014; Edwards, Dujardin, and Williams 2012; Hall, Tracy, and Lamey 2016; Henderson and Phillips 2015; Lamey 2015; Mayhew 2017). Lamey (2015) suggests that such awkwardness may stem from students being unused to the video format and is likely to diminish with repeated exposure to the format. While the personal nature of video feedback is often cited as a positive attribute, a small number of students in Borup et al.’s (2014) study found talking head video feedback to be too personal, with eye contact and direct attention from their marker making them feel uncomfortable. Students also report that seeing and hearing their marker deliver feedback can be confronting, one-sided, and dismissive, particularly due to its simulation of a face-to-face meeting (Grigoryan 2017a; Henderson and Phillips 2015; Lamey 2015). For one student, the sense of a one-sided conversation to which they could not respond left them feeling ‘especially helpless’ (Lamey 2015, 698). Feedback dialogue or an opportunity for students to respond to feedback was not deliberately incorporated into the feedback processes of any studies we reviewed.

A common criticism of talking head video feedback from students is that it can be difficult to match comments with the sections of the assignment to which they refer (Henderson and Phillips 2015; Lamey 2015). Similarly, in one study of screencast feedback, a student had not realised that the cursor was being used as a pointer (Elola and Oskoz 2016). Lamey (2015) also found that 36 per cent of students surveyed commented critically that video feedback focuses less than written feedback on individual passages in an essay.

An overview of the advantages of video feedback is provided in Table 1, while Table 2 provides a summary of the feedback format’s limitations.

Table 1. Advantages of video feedback.

Table 2. Limitations of video feedback.

Video feedback and student learning outcomes

Few studies have considered the impact of video feedback on student learning and performance, beyond a small number of student self-evaluations. To apply Boud and Molloy’s (2013) analysis of feedback practice, research into video feedback has thus far focused on the delivery of the feedback and its reception by students on a superficial level (positive/negative response), rather than on its impact on student learning – that is, its ability to change what students do – which they argue is feedback’s most fundamental role. As Nicol (2013) observes, changes to feedback that result in improved student perceptions do not necessarily translate to improved student learning outcomes. Nevertheless, students tend to be positive as to the perceived effectiveness of video feedback on their learning (McCarthy 2015; Parton, Crain-Dorough, and Hancock 2010; Vincelette and Bostic 2013). Markers from Orlando’s (2016) study felt that student work had improved after receiving screencast feedback but, again, changes in student performance were not evaluated.

Research by Moore and Filling (2012) provides some further, if still limited, insights into the potential effect of video feedback on student outcomes. Moore and Filling studied the provision of video feedback on a series of three student drafts, comparatively analysing revisions and overall changes in the quality of students’ work to determine the impact of the video format. Moore and Filling found that the majority of student drafts improved significantly from the first to the final draft, and students reported that the video feedback had motivated them to undertake meaningful revisions. However, it is not clear from the study, or that of Denton (2014), whether the quality of students’ work benefitted directly from the video format, or simply from the process of receiving feedback with the opportunity to revise their work prior to final submission. Ali (2016) found that an experimental group of English-language students receiving screencast feedback on the content, organisation and structure of their writing achieved higher mean scores for overall writing, content, organisation and structure than the control group which received written feedback. By contrast, Turner and West (2013) found screencast feedback had little impact, either positive or negative, on student grades. Grigoryan (2017b) offers the most robust study exploring efficacy, comparing the impact of written comments versus screencast and written comments in improving students’ writing. She found no statistical differences between the two groups, with a trend (and moderate effect size) noted for improved essay content and overall final draft quality for the experimental (screencast and text) group. As such, current findings on the impact of video feedback on student learning are limited and inconclusive.

Discussion

Current evidence suggests that video feedback is liked by students, has positive effects on student engagement with feedback, and is beneficial in strengthening student-marker relationships; in short, that video feedback is a promising alternative to traditional written feedback. Within an information transmission view of feedback, video feedback addresses a number of advocated delivery criteria, such as specific, personalised feedback, with the potential to enable more information to be transmitted (including feedforward comments) due to increased efficiency. It is superior to written feedback in terms of the depth and richness of relational cues provided, giving a more conversational feel. It seems to be less effortful for staff to produce and for students to view, at least within the sample reviewed here. Such affordances of video feedback accord with principles commonly reported in the feedback literature, including delivering high quality feedback information and encouraging positive motivation and self-esteem (Nicol and Macfarlane-Dick 2006). The use of video feedback has also led some researchers (e.g. Thomas, West, and Borup 2017) to focus more closely on markers’ social presence – that is, the relational aspects of feedback – through Garrison’s (2007) Community of Inquiry framework, which removes feedback from the realm of decontextualised information given to a student. However, addressing the remaining principles from the feedback literature – promoting self-assessment, feedback dialogue, understanding of standards, and opportunities to close the gap – requires deliberate curriculum design that goes beyond a change in delivery medium (Boud and Molloy 2013).

Positive reports of video feedback may make it appear that video is a panacea; however, viewing the existing research through critical and sociocultural lenses offers an alternative reading, in which researchers have – for the most part – merely substituted one medium (written) for another (video). Previous studies have identified no effect on learning as the result of a shift in media (Clark 1983). The purpose of feedback remains oriented towards the particular task and does not align with a sustainable assessment agenda. While video feedback may offer certain affordances (personalisation, visualisation, etc.), the one-way transmission of information restricts the role of students to viewer, and limits their agentic participation in the process. Essentially, current video feedback formats create an illusion of dialogue – what Harper, Green, and Fernandez-Toro (2012) refer to as ‘an imagined dialogue’ – but in fact offer limited or no avenues for students to respond to marker comments. High value feedback requires us to take into account the cognitive, structural and social-affective dimensions required for feedback dialogue (Ajjawi and Boud 2018). We address each of these in turn.

A problem common to all feedback formats is that of assuring shared meaning, with students’ interpretations often differing from markers’ intended meanings (Boud and Molloy 2013). This problem is perpetuated when students do not play an active role in the feedback process. Students are unable to seek clarification or respond to the markers’ feedback to defend their work. While students may have evaluated the quality of their work prior to submission, video feedback as currently utilised does not allow the kind of back-and-forth discussion that helps learners to develop an understanding of standards and calibrate their evaluative judgement. Ways of addressing the cognitive dimension include researching the effect of inviting students to articulate their understanding of feedback comments – for example, via video-recorded self-explanation (Chi et al. 1994) – or the use of a feedback journal with dialogue (Barton et al. 2016). These strategies enable markers and students to ask questions, encourage students to elaborate and reframe their ideas, and prompt students to critically evaluate their work. The structural dimension of feedback design dovetails here: it requires consideration of when in the learning period students engage with feedback, and what they are invited to do with it. Iterative feedback loops provide students with an opportunity to discuss their emerging interpretations of feedback comments, in light of their work and expected standards, and offer the chance for students to resubmit revised work. This approach characterises students’ engagement as part of a learning trajectory and requires opportunities for sustained longitudinal dialogue. Such structural design elements were largely lacking in our sample.

We find the social-affective dimension most intriguing, as we had anticipated that one of the affordances of video feedback would be increased relational cues and reduced perceptions of distance between the student and marker. While video feedback performs some elements of a conversation – such as spoken words, acknowledgement of the other, facial expressions, and tone of voice – key elements of human interaction remain missing. Perhaps student perceptions that screencast feedback is a dialogic interaction arise from a sense that their written words (e.g. an essay) are their contribution to an exchange, which their marker then ‘replies’ to through screencast feedback. The perceived dialogic nature of screencasts highlights the relational aspects students seek from feedback interactions – students felt that their markers wanted to help them by using an interactive communication process. Such positive credibility judgements by the learner form one dimension of an educational alliance with their tutor, alongside shared goals and tasks (Telio, Ajjawi, and Regehr 2015). When students judge an educational alliance to be strong – that is, that their marker is genuinely interested and invested in their learning – they report making more effort to engage with a particular piece of feedback at its moment of delivery, and are also more likely to positively engage in future feedback interactions with the same marker (Telio, Regehr, and Ajjawi 2016).

Carless (2013) discusses the importance of trust in feedback relationships for fostering students’ self-disclosure and engagement with feedback. The presence of visual and aural cues – including facial expressions, hand gestures, natural pauses, and intonation – conveys to students information about their marker’s enthusiasm, emotions, humour and personality. It is therefore possible that, by promoting the communication of these relational features, the video medium indirectly conveys to students that their marker is invested in their learning – thus strengthening perceptions of the educational alliance and influencing positive feedback behaviours. However, although video feedback is richer in relational cues, the absence of attention to the cognitive and structural dimensions, and the maintenance of an information transmission conceptualisation (hence privileging correction in the immediate task), highlight a limiting discourse in the existing video feedback research.

Limitations of existing research and a future research agenda

Our review highlights that the impact of video feedback on student engagement, learning and performance remains underexplored, and there are a number of caveats on existing research. In the first instance, as noted by Crook et al. (2012), the enthusiasm of markers for video feedback could be heightened by the novelty of the feedback format. Secondly, the researcher(s) themselves may also be a significant or sole contributor of data, primarily through reflections or observations of their experiences with video feedback (for instance, see Hall, Tracy, and Lamey 2016; Henderson and Phillips 2015; Klappa 2015; Lamey 2015; Parton, Crain-Dorough, and Hancock 2010). This involvement of the researcher in the feedback process may influence the data and the types of studies reported, and may also account for the high levels of marker enthusiasm for the video feedback format. Where marker perception data is drawn from multiple markers, the sample size typically remains very small (see Crook et al. 2012). Therefore, studies reporting marker perceptions may not provide a reliable indicator of broader marker attitudes towards video feedback.

Another limitation of research conducted into video feedback is its reliance on participant-reported data (Borup, West, and Graham 2012). Student and marker perceptions are the primary data source, from which conclusions are drawn on the impact of video feedback; this is a critical limitation noted in the assessment and feedback research field more generally (Jackel et al. 2017). Such perception-based research plays an important preliminary role, but must ultimately give way to more empirical examinations of video feedback’s potential to impact student learning and enhance performance (Borup, West, and Graham 2012).

A further limitation of present research is its small scope, with studies typically comprising small student cohorts and limited marker numbers (for instance, see McCarthy 2015; Moore and Filling 2012; Parton, Crain-Dorough, and Hancock 2010). Small sample sizes may limit a study’s transferability, and within small cohorts any impact of video feedback may be linked to markers’ familiarity with their students and their previous work. In addition, the feasibility of providing video feedback to large student cohorts remains largely unexamined in the current literature. Further research is needed to determine the viability of video feedback for large student cohorts, and to gauge the experience of providing video feedback for markers not directly engaged in conducting research on (and often also advocating for) video feedback.

There is a significant need for researchers to assess the extent of video feedback’s impact on student learning and performance, as the effect of the format on student learning outcomes is currently undetermined. Indeed, if, as Boud and Molloy (2013) contend, the primary role of feedback is to ‘change what students can do’, then whether video feedback impacts student learning, and how it compares to other feedback formats in this regard, is surely the foremost criterion in judging the merit – or otherwise – of the video format. Crook et al. (2012) also highlight the need to comparatively analyse the subsequent performance of individual students engaging with video and other feedback formats on similar assignments. However, this is not a matter of substituting one medium for another, but of adopting a more considered design approach that takes into account the situated feedback process through the interplay of cognitive, structural and social-affective dimensions.

Limitations of the current review

Although we sampled broadly and employed a snowball method, this review is not exhaustive. Rather, it is intended as a comprehensive scoping review to identify the diversity of existing research and video feedback conceptualisations, and to synthesise key gaps in the literature to inform a future research agenda. Given the relatively small number of papers found, we did not formally analyse the quality of each study in order to exclude papers, instead choosing to be inclusive of grey literature and project reports.

Conclusion

While the research considered in this review has found that the medium of video feedback has a generally high level of acceptability to students and markers, it has not yet been established whether the format improves students’ learning and performance and, if so, how this impact compares with other feedback formats. Video feedback also continues to perpetuate a monologic, ‘information transmission’ approach to feedback, albeit in a novel guise that gives the suggestion of dialogue. However, the potential scope for empirical research and theorisation around video feedback in relation to media richness, social presence, and the educational alliance remains considerable, particularly when comparing video feedback – talking head, screencast, and combination screencast – with its audio and written counterparts.


Disclosure statement

No potential conflict of interest was reported by the authors.


Appendix A: Summary table of included literature.